title string | paper_decision string | review_1 string | rebuttals_1 string | review_2 string | rebuttals_2 string | review_3 string | rebuttals_3 string | review_4 string | rebuttals_4 string | global_rebuttals string | dataset_source string | conference_year int64 | review_5 string | rebuttals_5 string | review_6 string | rebuttals_6 string | review_7 string | rebuttals_7 string | review_8 string | rebuttals_8 string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing | Accept (poster) | Summary: This paper proposes BLIP-Diffusion, a new subject-driven image generation model with a multimodal encoder that supports multimodal control, consuming subject images and text prompts as inputs.
Strengths: The proposed method is novel and enables subject-driven generation under efficient fine-tuning and zero-shot setups.
The proposed model can serve as a foundation model and be combined with previous methods such as Prompt-to-Prompt and ControlNet.
Solid experimental results and ablations.
Weaknesses: The contribution is a bit lacking. It seems to me that the work is just a combination of the multimodal encoder from CLIP and the t2i model.
The improvement under efficient fine-tuning setup seems to be incremental.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: I am curious about the contribution of the proposed pretraining approach. How's the performance for the naive pretraining strategy (keeping the same image as both the input to the multimodal encoder and the output of the diffusion model)?
How do you get the random background images for generating synthetic images?
For finetuning compared with Dreambooth, does it take the same time for each timestep?
What's the inference time cost of the proposed method compared with standard stable diffusion?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for confirming the novelty of our approach. We address questions below.
-------
**Q1**: contribution of the work?
**A1**: We summarize the scientific significance of BLIP-Diffusion as below:
- **BLIP-Diffusion represents a novel approach to subject-driven generation using multimodal encoder**. Previous work (DreamBooth, Textual Inversion) learns subject embeddings via inversion. Our approach using multimodal encoders represents a novel and generic technique that has proved more efficient than inversion. In addition, our approach can also benefit from the advancement of multimodal vision-language foundation models, offering better potentials for stronger subject-driven generative capabilities;
- **BLIP-Diffusion highlights a new two-staged pre-training strategy for category-generic subject-driven generation**. The multimodal representation learning stage harvests the high-quality text-aligned visual features. The subject representation learning stage includes a novel pre-training task prompted context generation, ensuring the subject visuals and text prompt can well coordinate for generation. Both stages are category-generic and require no domain-specific annotations, which make BLIP-Diffusion stand out from concurrent work.
- **Zero-shot subject-driven generative capabilities are unprecedented**. Zero-shot generation with highly-customized and category-generic subjects is a challenging task. Such zero-shot capabilities were not available in prior models. We enable this novel capability via the newly introduced subject representation learning stage, which represents a significant advancement in this domain per se.
- **BLIP-Diffusion features a foundational architecture that enables versatile applications**. Different from existing work, our model’s generative capabilities are showcased in multiple applications, including generation, editing, geometry-guided generation, image manipulation/stylization (see supplementary) and subject interpolation (see supplementary). This demonstrates the flexibility of our model and its potential to serve as a foundation subject-driven generation model.
- **BLIP-Diffusion demonstrates preferable generation results while offering significant speed-up in finetuning**. Specifically, our model fine-tunes 20x more efficiently than DreamBooth. This effectively reduces fine-tuning time per subject from 5-10 minutes (500-1000 fine-tuning steps) to sub-minute (50-100 fine-tuning steps). This has important implications for applications where fine-tuning efficiency matters, such as multimodal dialogues.
- We provide quantitative evaluation results on public datasets with category-generic subjects, which validate the effectiveness of the model. Our model will be open-sourced so that researchers and practitioners can reproduce our results and findings.
--------
**Q2**: Performance if using the same image for multimodal encoder and target.
**A2**: As described in Ln 142, this setup leads to trivial solutions where the image is directly copied. In our experiments, the resultant model always reproduces the input image, failing to address text prompts.
This observation also echoes the findings reported by DreamBooth, where the authors propose additional regularization to counteract the issue. Their regularization technique, however, does not easily scale to large-scale pre-training.
In this regard, by replacing the input background, we effectively avoid such optimization shortcuts and force the model to condition on both the subject visuals and the text prompt for generation.
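As a rough illustration of this idea (not the paper's actual data pipeline; the array shapes, the `replace_background` helper, and the compositing rule are all hypothetical), the synthetic input construction can be sketched as:

```python
import numpy as np

def replace_background(subject_rgb, subject_mask, background_rgb):
    """Composite a segmented subject onto a random background.

    subject_rgb / background_rgb: HxWx3 arrays; subject_mask: HxW with
    1 on subject pixels, 0 elsewhere. The composite serves as the encoder
    input while the original image remains the generation target, so the
    model cannot solve the task by simply copying its input.
    """
    mask = subject_mask[..., None].astype(np.float32)
    out = mask * subject_rgb.astype(np.float32) + (1.0 - mask) * background_rgb.astype(np.float32)
    return out.astype(subject_rgb.dtype)

# Toy 2x2 example: the subject pixel survives, background pixels are replaced.
subject = np.full((2, 2, 3), 200, dtype=np.uint8)
mask = np.array([[1, 0], [0, 0]])
background = np.zeros((2, 2, 3), dtype=np.uint8)
composite = replace_background(subject, mask, background)
```

In this sketch only the pixel where the mask is 1 keeps the subject's value; all other pixels come from the random background, which is what removes the copy shortcut.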
----
**Q3**: How do you get the random background images?
**A3**: We download 59K images from a royalty-free stock photo website by querying for background and landscape images, as these images usually contain few salient or distracting subjects.
----
**Q4**: For finetuning compared with DreamBooth, does it take the same time for each timestep?
**A4**: **Yes, our model takes the same time for each iteration as DreamBooth**.
As described in Section 3.3, we only fine-tune the U-Net to specialize for custom subjects. Subject embeddings are pre-computed at the cost of merely one forward pass, which takes negligible time, so the multimodal encoder is not needed during fine-tuning.
To give a better idea of the wall-clock cost: on the DreamBooth benchmark, DreamBooth models take 6-10 minutes on average for 600-1000 fine-tuning steps, while our model takes less than a minute for around 80 steps on average.
------
**Q5**: What's the inference time cost of the proposed method compared with standard stable diffusion?
**A5**: **Inference of BLIP-Diffusion is as efficient as standard stable diffusion models**.
For zero-shot inference, only one additional pass of multimodal encoder is needed to initialize the subject embedding. This cost is negligible compared to the iterative denoising step.
For inference with fine-tuned model checkpoints, subject embeddings are loaded as part of the model. No additional inference cost is needed.
------
**Q6**: Improvement under efficient fine-tuning seems incremental.
**A6**: Fine-tuning aims to learn better subject visuals for highly customized subjects. This consequently leads to better subject alignment scores, i.e. DINO and CLIP-I, as indicated in Table 1. These results demonstrate the effectiveness of fine-tuning.
In contrast, our fine-tuning setup does not explicitly optimize for prompt understanding, and thus shows no significant effect on the CLIP-T score, which is naturally expected. While our work does not emphasize the design of the fine-tuning procedure, it may also be possible to integrate better-calibrated fine-tuning and inference methods (e.g. [1]).
We will revise the manuscript to highlight this discussion.
[1] Key-Locked Rank One Editing for Text-to-Image Personalization, Tewel, Yoad and Gal, Rinon and Chechik, Gal and Atzmon, Yuval, SIGGRAPH 2023.
-------
Hope the response addresses the questions.
---
Rebuttal Comment 1.1:
Comment: Thanks a lot for the authors' detailed responses. My concerns are mostly addressed, but I am still not quite sure about the efficiency improvement. As said, the "subject embeddings are pre-computed" and the model takes the same time for each iteration. It is unclear to me why the model takes many fewer steps to fine-tune. Is the comparison fair? Are you using the same backbone as DreamBooth? For now I keep my original score.
---
Reply to Comment 1.1.1:
Comment: Thanks for your question and please kindly find our response below.
As the title highlights, the improved fine-tuning efficiency comes from the “pre-trained subject representation”. This was motivated by the inefficiency of DreamBooth as described in LN 27, which we reiterate here:
“*We attribute such inefficiency to the fact that most pre-trained text-to-image models do not natively support multimodal control - using both images and texts as control input. As a result, it becomes challenging to learn subject representation that aligns with the text space while capturing the subject visuals with high fidelity*”.
Technically, DreamBooth always initializes fine-tuning **from a randomly initialized embedding**. In contrast, our model initializes fine-tuning **from subject embeddings produced by the multimodal encoder**, which largely capture the subject visuals even before fine-tuning (as evidenced by the zero-shot results). This leaves a much smaller visual gap to close during fine-tuning, which translates to fewer fine-tuning steps.
And yes, we use Stable Diffusion as described in LN 217. The DreamBooth models we compare to use the same backbone.
Please let us know if more clarification is required; we are more than happy to address any remaining concerns.
Thanks. | Summary: This paper aims to solve the subject-driven text-to-image generation with a pre-trained subject representation that is derived from a vision-language encoder, the BLIP2 model. The obtained subject representation captures rich information of the visual input while being aligned with the textual space. The text-conditioned diffusion model is pre-trained to produce images based on this subject representation and the text input. The derived diffusion model can produce novel impressive renditions of a given subject with respect to different text prompts, in both zero-shot and few-step finetuned scenarios. Beyond that, the paper presents fancy applications like controlled generations and subject-driven stylization. Compared to the state-of-the-art approaches, the proposed BLIP-Diffusion demonstrates preferable subject-driven generation results both qualitatively and quantitatively while offering significant speed-up.
Strengths: The paper explicitly leverages the text-aligned image feature and the diffusion model trained on this subject representation is able to generate subject renditions with compelling quality. The novel combination with the BLIP2 model forms an effective solution to subject-driven text-to-image generation. The method presented is neat and elegant.
Moreover, I like many of the technical details the authors propose to achieve the final quality. For example, the authors carefully design a procedure to form synthetic training pairs that tackles the tendency to produce trivial solutions. Also, it is found that randomly dropping the subject prompt with some probability is beneficial to text-to-image generation.
The paper demonstrates fancy applications in various scenarios and presents impressive generation results. The significant speed-up for subject-driven generation makes it promising for practical usage. The quantitative study further proves the superiority of the method over prior leading approaches.
Weaknesses: - One possible way to explain the method is that this paper uses a detailed text prompt instead of a rough description like "an image of [V]" as used in DreamBooth. That is, the features captured from BLIP2 may not be the key. One simpler baseline is "Encoder-based Domain Tuning for Fast Personalization of Text-to-Image Models". The paper should mention such a baseline and add more discussion on this point.
- To avoid the distraction of the background, the authors propose to replace the background of the target image with random background. Then why not just remove the background during training?
- One primary usage of subject-driven text-to-image models is to generate images related to humans. However, the paper avoids this scenario by explicitly removing human-related images during training, why is that? It is suggested to have a discussion regarding this, otherwise, it is suspicious that the method does not perform well on portraits.
- While the paper qualitatively measures the fidelity to the input image and the text prompt respectively, there is no measure for image quality. While in the chosen samples of Figure 6 the BLIP-Diffusion shows an apparent quality advantage over prior arts, it still needs a quantitative measure to reflect the image quality over a large number of image results.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: It is suggested to provide more technical details for the style transfer part. Does it need some special prompt design when specifying styles according to the input image?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The limitation is sufficiently discussed in the supplementary material.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for confirming the technical depth and empirical value of our work. In the following, we respond to the reviewer's questions.
------
**Q1**: paper uses a detailed text prompt instead of a rough description like "an image of [V]" as used in Dreambooth. the features captured from the BLIP2 may not be the key; discussion on E4T.
**A1**: We use BLIP-2 captions for pre-training. While BLIP-2 captions are more detailed than template prompts “an image of [V]”, they are not sufficient to describe the exact subject visuals. Therefore, having subject embeddings is crucial in our model.
To provide evidence for this claim, we kindly refer the reviewer to Table 2, which shows that the generation performance improves with better subject embeddings. This highlights the importance of subject embedding on the generation quality. Qualitatively, in Figure 7, we show that subject embeddings learn meaningful global and local subject visuals. This further supports the effectiveness of subject embeddings.
We are aware of the E4T work and have also referenced it in LN 74. It is worth highlighting that BLIP-Diffusion is category-generic, and can be applied to any customized subjects. This is supported by our quantitative evaluation on the DreamBooth datasets. In contrast, E4T pre-trains the model on domain-specific datasets, e.g. cats, thus cannot generalize easily to out-of-domain subjects.
We are happy to include an expanded discussion on E4T in the revised version.
-----
**Q2**: Why not just remove the background during training?
**A2**: We thank the reviewer for this insightful question.
This would impose an additional constraint that the input subject image also have an empty background during fine-tuning and inference. As a result, it would most likely require an additional background-removal procedure and introduce an external dependency, which we consider suboptimal. In contrast, in our setup, the model automatically identifies the subject from the input image after pre-training, even with distracting background scenes.
-------
**Q3**: Application on portraits.
**A3**: We thank the reviewer for raising this issue. While we acknowledge the practical value of portrait generation, we purposely avoid human-related generation for training and evaluation. This is mainly to abide by the code of ethics of the conference (https://neurips.cc/public/EthicsGuidelines), and was also motivated by related NeurIPS work [1]. With due respect, we kindly ask the reviewer to share this concern.
As such, we would expect the model checkpoint accompanying this submission not to perform well on human portraits. However, we note that the proposed two-staged pre-training strategy is category-generic. Interested readers may opt to resume pre-training on domain-specific datasets to tailor the model to their own application scenarios.
[1] PASS: An ImageNet replacement for self-supervised pretraining without humans, Yuki M. Asano, Christian Rupprecht, Andrew Zisserman, Andrea Vedaldi
------
**Q4**: Quantitative image quality measure.
**A4**: We thank the reviewer for the suggestion. We recognize that quantitatively measuring image quality for subject-driven text-to-image generation is an open research question, and to the best of our knowledge, few prior works present relevant results.
As such, we propose to use the aesthetic scorer (https://github.com/LAION-AI/aesthetic-predictor) to measure the visual quality of the generated images; this scorer was used to select the high-quality image data for training state-of-the-art diffusion models.
The aesthetic scorer predicts an aesthetic score bounded from 0 to 10 using CLIP ViT-L/14 features, where 10 is the highest aesthetic score. To get a better understanding of the score, we reference the description from LAION project page (https://laion.ai/blog/laion-aesthetics/) that out of the LAION 5-billion images:
- 1.2B (24%) images have scores 4.5+;
- 12M (0.24%) images have scores 6+;
- 3M (0.06%) images have scores 6.25+;
- 625K (0.01%) images have scores 6.5+;
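For reference, a shape-level sketch of how such a scorer maps a CLIP image embedding to a bounded score is given below. The weights here are random placeholders, not the released LAION checkpoint, and the clamping rule is illustrative only; the real predictor is a small learned head on CLIP ViT-L/14 embeddings.

```python
import numpy as np

clip_dim = 768  # CLIP ViT-L/14 image-embedding width

rng = np.random.default_rng(0)
head_w = rng.standard_normal(clip_dim) * 0.01  # placeholder weights, not the trained head
head_b = 5.0                                   # placeholder bias

def aesthetic_score(clip_embedding):
    """Map a CLIP image embedding to a scalar aesthetic score bounded in [0, 10]."""
    emb = clip_embedding / np.linalg.norm(clip_embedding)  # L2-normalize as CLIP features usually are
    raw = emb @ head_w + head_b
    return float(np.clip(raw, 0.0, 10.0))

fake_embedding = rng.standard_normal(clip_dim)  # stand-in for a real image's CLIP feature
score = aesthetic_score(fake_embedding)
```

Averaging such per-image scores over a set of generations is what produces the table below.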
We calculate aesthetic scores for 3000 images generated by our model and compare them with those generated by DreamBooth, using prompts and subjects from the DreamBooth datasets. The results are shown below.
| Setups | Aesthetic Scores |
| ----------- | ----------- |
| BLIP-Diffusion (fine-tuned) | 6.50 |
| BLIP-Diffusion (zero-shot) | 6.43 |
| DreamBooth | 6.20 |
**These results show that**:
- BLIP-Diffusion produces better image quality than DreamBooth. We attribute this to the two-staged pre-training strategy, which helps better align subject embeddings and text embeddings.
- BLIP-Diffusion after fine-tuning produces images of top-0.01% quality when compared against the 5B-image internet corpus, while DreamBooth quality is among the top 0.24%. This shows a clear quality advantage for our model.
- Fine-tuning helps our model produce higher-quality images.
We will include this result into the supplementary material during revision.
------
We thank the reviewer for appreciating our work and hope the response addresses the questions. | Summary: The paper introduces "BLIP-Diffusion", a new subject-driven image generation model that supports multimodal control using subject images and text prompts. The model introduces a pre-trained multimodal encoder to provide subject representation and enables zero-shot subject-driven generation and efficient fine-tuning for customized subjects. This model can be combined with existing techniques to enable novel subject-driven generation and editing applications.
Strengths: 1. Originality: The paper introduces a novel subject-driven image generation model, BLIP-Diffusion, which supports multimodal control using subject images and text prompts. This model is original in its approach as it combines a pre-trained multimodal encoder for subject representation, enabling zero-shot subject-driven generation and efficient fine-tuning for customized subjects.
2. Quality: The quality of the paper is evident in the detailed explanation of the model and the comprehensive experiments conducted to validate its performance. The paper includes qualitative results that demonstrate the model's capabilities, such as zero-shot subject-driven generation and high-fidelity fine-tuning. The model also shows high subject fidelity and prompt relevance, requiring significantly fewer fine-tuning steps compared to other methods.
3. Clarity: The paper is well-structured and clear in its presentation. The authors provide a thorough explanation of the model, its implementation, and the experiments conducted. The use of figures and tables further enhances the clarity of the paper, providing visual representations of the model's performance and capabilities.
4. Significance: The significance of the paper lies in its contribution to the field of image generation. The BLIP-Diffusion model presents a new approach to subject-driven image generation, offering potential for novel subject-driven generation and editing applications. The model's ability to perform zero-shot subject-driven generation and efficient fine-tuning for customized subjects is a significant advancement in this field.
Weaknesses: Limited Zero-Shot Performance and Dependence on Fine-Tuning: The paper claims that the proposed BLIP-Diffusion model can perform zero-shot rendering of images across various categories of subjects. However, the results presented do not fully substantiate this claim. Both qualitatively and quantitatively, the zero-shot results are not as impressive as one might expect. Furthermore, the model's performance seems to heavily rely on fine-tuning. While fine-tuning is a common practice in machine learning, the extent to which the model depends on it raises questions about its practicality and efficiency. The necessity of fine-tuning to achieve good results could be seen as a limitation, especially in scenarios where rapid or on-the-fly generation is required. This dependence on fine-tuning could limit the model's applicability and ease of use in certain contexts.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Clarification on Zero-Shot Performance: The paper claims that the BLIP-Diffusion model can perform zero-shot rendering of images across various categories of subjects. However, the results presented do not fully substantiate this claim. Could the authors provide more evidence or examples to support this claim?
2. Editability Issue: The paper suggests that using trained background replaced subject images can address the editability issue. However, it seems that this approach might only deal with recontextualization, not subject area editing. Could the authors justify this approach and explain how it addresses the editability issue?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The paper's main claim is that the proposed BLIP-Diffusion model can perform zero-shot rendering of images across various categories of subjects. However, the results presented do not fully substantiate this claim. Both qualitatively and quantitatively, the zero-shot results are not as impressive as one might expect. Furthermore, the model's performance seems to heavily rely on fine-tuning. While fine-tuning is a common practice in machine learning, the extent to which the model depends on it raises questions about its practicality and efficiency. The necessity of fine-tuning to achieve good results could be seen as a limitation, especially in scenarios where rapid or on-the-fly generation is required. This dependence on fine-tuning could limit the model's applicability and ease of use in certain contexts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for confirming the originality, quality, clarity and significance of our work. We provide response to reviewer's question below.
-----
**Q1**: Zero-shot performance and its applicability.
**A1**: As described in Section 3, BLIP-Diffusion is the first model to unlock zero-shot subject-driven generation capabilities for generic categories. The significance of its zero-shot capability is summarized below:
1. **Zero-shot subject-driven generative capabilities are unprecedented**. Zero-shot generation with highly-customized and category-generic subjects is a challenging task. Such zero-shot capabilities were not available in prior models. We enable this novel capability via the newly introduced subject representation learning stage, which represents a significant advancement in this domain per se.
2. **Zero-shot subject-driven generation quality outperforms some recent fine-tuning methods**. Quantitatively, Table 1 shows that the zero-shot result significantly outperforms the recent fine-tuning based model Textual Inversion (TI). In particular, BLIP-Diffusion achieves a zero-shot DINO score of 0.594 (versus 0.569 for TI) and a CLIP-T score of 0.300 (versus 0.255 for TI). This demonstrates the competitive performance of zero-shot generation against the prior art.
3. **Whether fine-tuning is needed depends on the application**. Apart from subject-driven generation, BLIP-Diffusion also facilitates various additional zero-shot applications. We kindly refer the reviewer to the supplementary material Figure 3, Figure 4 for more examples, where we show results using BLIP-Diffusion for stylization / image manipulation. Specifically, our model can use the appearance style of an input subject to guide the generation. In this application, despite no fine-tuning being performed, our model produces visually appealing and creative generations. In light of this, we consider that fine-tuning is not an absolute dependency, and its necessity can be determined based on the application.
While being able to generate and edit in the zero-shot fashion is an appealing capability, it is important to underscore the novel concept of pre-trained subject representation and the two-staged pre-training strategy. In this regard, we consider the zero-shot capabilities of BLIP-Diffusion as compelling evidence of the effectiveness of our pre-training approach.
-----
**Q2**: Fine-tuning limits the ease of use in certain contexts.
**A2**: This is precisely the motivation for introducing pre-trained subject representation into the text-to-image generation model. With such pre-trained representation, BLIP-Diffusion improves subject-driven fine-tuning efficiency by up to 20 times compared to the state-of-the-art model DreamBooth. This effectively reduces fine-tuning time per subject from 5-10 minutes (500-1000 fine-tuning steps) to sub-minute (50-100 fine-tuning steps). While further improvement is certainly possible, we consider BLIP-Diffusion clearly superior to leading solutions in terms of fine-tuning efficiency.
-----
**Q3**: Justification of the prompted context generation pre-training task with background replacement.
**A3**: We appreciate the reviewer's question on how background replacement helps subject representation learning. The referred "editability" requires (1) transferring the subject visuals; (2) generation guided by text prompts. Our background replacement fulfills both requirements. Even though pre-training only recontextualizes the subject, the model generalizes to other text prompts and can directly edit the subject. More technically, the prompted context generation task learns a joint subject-text space, which allows any text prompt to interact with the subject visuals.
-----
We hope the response addresses reviewer's questions. We will revise the manuscript accordingly for better clarity.
---
Rebuttal Comment 1.1:
Comment: Hi reviewer, we appreciate your time and effort in providing reviews and we have provided rebuttal accordingly.
Does the rebuttal address your concerns? Please kindly let us know of any remaining feedback.
Strengths: The paper is well-motivated and well-written. The key idea of the work to deeply align subject embedding and the text embedding is interesting. The proposed model is compatible with ControlNet and Prompt-to-Prompt, which has potential to unleash several important editing capabilities. The experiment results are solid, with sufficient evaluations.
Weaknesses: I see the key novelty of the proposed approach in section 3.2 where the paper introduces the subject representation learning. The paper mentions that output of the BLIP-2 multimodal encoder is passed to CLIP Text Encoder by combining the text and subject embeddings. I suggest authors elaborate this to provide more concrete details. We get embeddings as output from CLIP Text Encoder right? How is it possible to combine them before passing as input to CLIP Text Encoder. I might be missing something here, would be great if the paper clarifies this in rebuttal. Also, it is not clear if the paper additionally finetunes the CLIP Text encoder. In lines 164-164, it is mentioned that the Text encoder is also fine-tuned. But wouldn’t that bring language drift issues? Are there any specific strategies used by the paper in fine-tuning text encoder that aid in avoiding the language drift problem? Please clarify. Also, in section 3.2, how many synthetic pairs are used? I am happy to revise my final rating based on the clarifications in the rebuttal.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see weaknesses.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for confirming the novelty of our approach and the comprehensiveness of our results. We address the reviewer's questions below.
-----
**Q1**: “We get embeddings as output from CLIP Text Encoder right? How is it possible to combine them before passing as input to CLIP Text Encoder ”
**A1**: Thanks for the question. As described in Section 3.2, we use subject embeddings from the BLIP encoder as soft visual prompts. Specifically, given a text prompt:
- we first pass the text tokens through the CLIP embedding layer, obtaining text token embeddings;
- we then concatenate the subject embeddings and the text token embeddings;
- the combined embeddings are passed to the subsequent CLIP layers, i.e. the positional embedding layer, then the self-attention layers.
We will revise LN131-136 to clarify this further.
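For concreteness, the combination step described above can be sketched as follows (a minimal NumPy sketch; the shapes, the subject-first ordering, and all variable names are illustrative assumptions, not the actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                   # embedding width (illustrative)
text_tokens = rng.normal(size=(5, d))   # hypothetical output of the CLIP embedding layer
subject_emb = rng.normal(size=(2, d))   # hypothetical BLIP-2 subject embeddings (soft visual prompt)

# Concatenate the subject embeddings with the text token embeddings;
# the combined sequence is what the subsequent CLIP layers
# (positional embeddings, then self-attention) would consume.
combined = np.concatenate([subject_emb, text_tokens], axis=0)
print(combined.shape)  # (7, 8)
```

The point is that the combination happens at the token-embedding level, before the rest of the CLIP text encoder, so the subject embeddings act like extra "soft" tokens.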
-----
**Q2**: “It is mentioned that the text encoder is also fine-tuned. But wouldn’t that bring language drift issues?”
**A2**: As described in the DreamBooth paper, language drift causes the model to “associate the class name with the specific instance”, because fine-tuning the CLIP text encoder overfits it to the subject appearance.
Different from DreamBooth, our model uses a subject embedding produced by a new BLIP encoder. The CLIP text encoder is tasked to align the subject embedding with the text prompt, rather than learning the subject appearance. Therefore, we reduce the risk of the text encoder overfitting to specific subject instances. Furthermore, we give captions to the text encoder during training to preserve its language capability.
Empirically, we do not observe the issue of language drift. As qualitative evidence, in the subject editing application (Figure 5, #9-10), our model generates images well-aligned with the text, either with subject embeddings (images after editing) or with text embeddings only (images before editing). This is also evidenced quantitatively by Table 1, which shows that our model produces comparable or better text alignment than prior work.
-----
**Q3**: “How many synthetic pairs are used”?
**A3**: We use a subset of 292K images from OpenImage-V6, as described in the paper (Ln 214). Please also find the detailed description of the selection criterion in supplementary material A.5. We are happy to expand this description in the revised version for better clarity.
-----
**Below, please allow us to reiterate the significance of our work**.
- **BLIP-Diffusion represents a novel approach to subject-driven generation using a multimodal encoder**. Previous work (DreamBooth, Textual Inversion) learns subject embeddings via inversion. Our approach using multimodal encoders represents a novel and generic technique that has proved more efficient than inversion. In addition, our approach can also benefit from advances in multimodal vision-language foundation models, offering better potential for stronger subject-driven generative capabilities;
- **BLIP-Diffusion highlights a new two-staged pre-training strategy for category-generic subject-driven generation**. The multimodal representation learning stage harvests high-quality, text-aligned visual features. The subject representation learning stage includes a novel pre-training task, prompted context generation, ensuring the subject visuals and text prompt can coordinate well for generation. Both stages are category-generic and require no domain-specific annotations, which makes BLIP-Diffusion stand out from concurrent work.
- **Zero-shot subject-driven generative capabilities are unprecedented**. Zero-shot generation with highly-customized and category-generic subjects is a challenging task. Such zero-shot capabilities were not available in prior models. We enable this novel capability via the newly introduced subject representation learning stage, which represents a significant advancement in this domain.
- **BLIP-Diffusion features a foundational architecture that enables versatile applications**. Different from existing work, our model’s generative capabilities are showcased in multiple applications, including generation, editing, geometry-guided generation, image manipulation/stylization (see supplementary) and subject interpolation (see supplementary). This demonstrates the flexibility of our model and its potential to serve as a foundational subject-driven generation model.
- **BLIP-Diffusion demonstrates preferable generation results while offering significant speed-up in fine-tuning**. Specifically, our model fine-tunes 20x more efficiently than DreamBooth, effectively reducing fine-tuning time per subject from 5-10 minutes (500-1000 fine-tuning steps) to under a minute (50-100 fine-tuning steps). This has important implications for applications where fine-tuning efficiency matters, such as multimodal dialogues.
We provide quantitative evaluation results on public datasets with category-generic subjects, which validate the effectiveness of the model. Our model will be open-sourced so researchers and practitioners can reproduce our results and findings.
-----
**Q4**: I am happy to revise my final rating based on the clarifications in the rebuttal.
**A4**: We hope the response clarifies the questions. We humbly request the reviewer to re-evaluate the significance of our work based on the response above.
---
Rebuttal Comment 1.1:
Comment: Hi reviewer, we appreciate your time and effort in providing the review, and we have provided a rebuttal accordingly.
Does the rebuttal address your concerns? Please kindly let us know if you have any remaining feedback.
---
Rebuttal 2:
Comment: Thanks for the detailed responses to my questions on combining text and subject embeddings, language drift issues, and clarifying the number of synthetic pairs used. I updated my reviews. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Transformers over Directed Acyclic Graphs | Accept (poster) | Summary: The paper proposes an attention mechanism for directed acyclic graphs. Specifically, for a given node, the reachability to/from other nodes is considered, and the unreachable nodes are masked out. The authors use this attention mechanism in existing graph transformers for undirected graphs. Further, the paper also proposes a new PE for directed graphs that considers the maximum depth of a node from the root nodes. The method is evaluated on source code and citation datasets, and the authors show it improves existing transformer performance.
Strengths: 1. The method achieves state of the art results when induced in transformer architectures on datasets used
2. Method is claimed to be parallelizable and results in faster training compared to asynchronous methods
Weaknesses: 1. Regarding limiting the receptive field of the transformer, the method selects nodes up to path length k. But a principal question arises: “How to select k?”. As of now it seems to be a heuristic or a dataset-dependent parameter, which limits the novelty of the work, as it is straightforward to apply attention to the nodes reachable within k hops in order to induce some notion of structure in the transformer.
2. Considering that the method only limits the attention to reachable nodes and uses topological depth as PE in existing methods the novelty and contribution seems limited.
3. The proposed attention mechanism that is restricted to reachable nodes, may suffer from learning issues when a node label depends on non-reachable nodes. For example if the label of a child node depends on sibling nodes but those are not reachable and the transformer used is of single layer. The same issue may arise with more layers etc. if there exists label dependence on farther non-reachable nodes. This is where the fundamental strength of the transformer lies over GNN that it doesn't need to wait over many layers for the information to be propagated. But with the proposed modification this property is lost.
Missing Citations:
1. https://proceedings.mlr.press/v162/dong22b.html
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Regarding limiting the receptive field of the transformer, the method selects nodes up to path length k. But a principal question arises: “How to select k?”. Can the authors comment on a more principled way of selecting k rather than keeping it a heuristic or a dataset-dependent parameter?
2. In Table 6, the results for the base transformer model and the proposed method without DAGRA are the same in ogbg-code2. Is this because no PE is used and the rest of the architecture remains the same?
3. Do the authors see any issues in learning with the proposed attention mechanism that is restricted to reachable nodes? For example if the label of a child node depends on sibling nodes but those are not reachable and the transformer used is of single layer. The same issue may arise with more layers etc. if there exists label dependence on non-reachable nodes. This is where the fundamental strength of the transformer lies over GNN that it doesn't need to wait over many layers for the information to be propagated. But with the proposed modification this property is lost. Can the proposed framework be adapted to handle such cases?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 1 poor
Limitations: Limitations of the method have been addressed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Thank you for the constructive comments!**
We indeed missed discussing the rather important Q3 in the paper; we detail the actual power of our DAG attention in the global reply. There we also add details about the comparison to the reference mentioned. We are sorry for the confusion caused by the missing explanation and hope that our contributions are clearer now.
Please let us know if there are any questions left!
**Q1 Clarification: Receptive Field**
Note that we generally recommend $k=\infty$ (line 273), which means we consider all nodes related via the reachability relationship. This is a reasonable choice given our theoretical analysis, based on the proven random walks, and has also shown best performance in all our experiments. In fact, observe that our framework distinguishes itself in that it is easy to use, requiring no hyperparameter selection or tuning (see also WzDZ, Q1).
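To make the reachability-based masking concrete, here is a minimal NumPy sketch over a toy DAG (illustrative only; the edges and variable names are hypothetical, and the actual implementation uses message-passing GNNs rather than dense masks):

```python
import numpy as np

# Toy DAG (hypothetical): edges point from predecessor to successor.
n = 4
edges = [(0, 1), (1, 3), (0, 2)]
A = np.zeros((n, n), dtype=int)
for u, v in edges:
    A[u, v] = 1

# Reachability = transitive closure, i.e. the recommended k = infinity case.
R = A.copy()
for _ in range(n):
    R = np.minimum(R + R @ A, 1)

# A node may attend to its predecessors, its successors, and itself;
# all other pairs are masked out with -inf before the softmax.
allowed = (R + R.T + np.eye(n, dtype=int)) > 0
scores = np.random.default_rng(0).normal(size=(n, n))  # raw attention logits
masked = np.where(allowed, scores, -np.inf)
weights = np.exp(masked - masked.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)

# Nodes 1 and 2 are mutually unreachable, so no attention flows between them.
print(weights[1, 2], weights[2, 1])  # 0.0 0.0
```

With k = infinity there is no extra hyperparameter: the mask is fully determined by the DAG's reachability relation.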
**Q2 Setting over ogbg-code2**
Yes, in ogbg-code2, no PE is used, the rest of the architecture remains the same. Similar to the transformer baselines we used, we did not see improvement with PE; see also lines 268-270 in the paper. We hypothesize that this is related to the underlying transformer and also the data, since there is a work using a PE [1], but they consider textual information beyond the graph recommended in the benchmark.
[1] Zügner et al. "Language-Agnostic Representation Learning of Source Code from Structure and Context." ICLR 2020.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thanks to the authors for the clarifications and it helps my understanding of the proposed method and contributions. I will think through the discussed points in detail and consider my review in light of the responses.
Many Thanks
---
Rebuttal 2:
Title: Response by Authors
Comment: Thank you for getting back to us and for considering our rebuttal in the final review!
---
Rebuttal 3:
Title: Author Question
Comment: Dear Reviewer xj5i,
We hope the clarifications in the rebuttal have improved your view of our contribution.
We are sorry to bother you again, but since the competition is tough and you acknowledged our rebuttal, we wanted to make sure our work doesn't get forgotten.
---
Rebuttal Comment 3.1:
Comment: I have read the rebuttal and I want to express my appreciation for the authors' response. The response essentially addresses my questions. Therefore, I will keep my rating unchanged for now and make a final decision during the reviewer discussion phase. Many Thanks!
---
Rebuttal 4:
Title: Final Comment by Authors
Comment: Thank you so much for your participation in the discussion and for mentioning it to us! | Summary: This paper adapts transformers to directed acyclic graphs. It restricts the receptive field of each node to its predecessor and successor nodes so that it faithfully captures the DAG structure. It also incorporates positional encodings based on the node depth. Extensive experiments show that it can improve performance of different kinds of baseline transformers.
Strengths: 1. The proposed method is efficient and versatile. It can improve performance of a broad spectrum of transformers.
2. The problem of adapting transformers to DAGs is under-explored.
3. The analysis and experimental results regarding $k$ is surprising and inspiring.
Weaknesses: 1. The datasets used in this paper are rather small. As efficiency and versatility are two key advantages claimed by the paper, can the authors include experimental results on larger graphs, like ogbn datasets?
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. The authors mentioned that they used the same hyperparameters as baseline transformers. Does this include learning rate, weight decay and dropout? If so, one only needs to integrate the proposed module without searching any hyperparameters (as long as $k$ is large enough) and can immediately facilitate an enhancement in performance.
2. See weakness 1.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Most limitations have been addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Thank you for the thoughtful comments!**
These are interesting points that highlight our contribution, and we have included them in the paper.
**Q1 No Hyperparameter Tuning**
We indeed used the same hyperparameters as the baseline transformers, which include learning rate, weight decay, and dropout. Extensive experiments showed that integrating our proposed module without the need for hyperparameter search can lead to an immediate improvement in performance, and hence considerably ease usability.
**Q2 Larger Datasets**
So far, we focused on datasets which were considered with models tailored to DAGs or transformers in the past. Due to the limited time in the rebuttal period, we report only some numbers here, but we will conduct further experiments and also consider other transformers.
**ogbn-arxiv, Citation Data.** We removed a small number of cyclic citation links to create DAGs, used the official splits, averaged over 5 runs, and compared vanilla Transformer to similarly proven GNNs. The results demonstrate that we can adapt the transformers to large graphs and achieve competitive results.
| Methods | GCN | GraphSAGE | GPRGNN | Transformer | DAG+Transformer |
| ---------- | ---------------- | ---------------- | ---------------- | ----------- | ---------------- |
| ogbn-arxiv | 71.72 $\pm$ 0.45 | 71.46 $\pm$ 0.26 | 70.90 $\pm$ 0.23 | OOM | 71.53 $\pm$ 0.34 |
**MalNet-Tiny, Function Call Graphs (FCGs)[1].** 5k graphs with max. 5k nodes, originating from benign software or 4 malware types; these 5 types are to be predicted. We use the very same setting as GraphGPS, which employs the Local Degree Profile as the set of node features, and add DAG attention on top of GraphGPS by only modifying the self-attention module, switching it to DAG attention. Note that we did not do any hyperparameter tuning. DAG+GraphGPS achieves a slightly better test accuracy while only requiring 20 seconds per epoch. This confirms the quality and efficiency of DAG attention.
| Model | Accuracy (%) | Time(epoch) |
| ---------------- | -------------------- | ----------- |
| GraphGPS | 92.64 $\pm$ 0.78 | 46s |
| **DAG+GraphGPS** | **93.45 $\pm$ 0.41** | **20s** |
-----------------------------------
[1] Freitas, et al. "A large-scale database for graph representation learning." NeurIPS 2021
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thanks for the rebuttal, most of my concerns have been addressed.
---
Rebuttal 2:
Title: Thank you for initiating further interesting discussion!
Comment: **It is a good idea to point out that the focus of SOTA research [2,3] on regular graph transformers is very different from our proposal**, which shows that we can exploit the special structure of DAGs to considerably improve their effectiveness in this very specific - but sometimes important - setting.
We can certainly add the indeed proven method [1] as baseline. Please also note that we experimented with the shortest path distance from that paper to address U6X8, Q3.
Please note that, since there are many interesting SOTA graph transformers, our submission focused on GraphGPS and NodeFormer, which provided SOTA scores at the time when we were conducting our experiments. We'll run further experiments to include [3] and [3] + DAG into the tables now. **This could indeed nicely underline the fact that our framework's general nature allows for continuous and even subtle SOTA advances** (i.e., here, comparing NodeFormer and DIFFormer [3]). We'll update you here once the results are available!
---
Rebuttal Comment 2.1:
Title: Response to rebuttal
Comment: Allow me to clarify my previous point. My intention was simply to recommend incorporating references to these works within the related works section, thereby enhancing the comprehensiveness of the background. I understand that the rebuttal timeframe is limited, and my intention was not to burden you with additional efforts in introducing more baselines or applying the proposed method to new backbones. After all, you have already applied the proposed method to 3 backbones and I think it is enough. Nevertheless, in case circumstances permit, you could consider including the previously mentioned experimental outcomes in the final version. Thank you for the feedback.
---
Rebuttal 3:
Title: Thank you for clarifying!
Comment: We wanted to make sure we address *all* remaining concerns, and the experiment is a valid proposal. Preliminary experiments on some datasets show that **our framework likely improves both quality and runtime here as well, sometimes considerably**.
| | Cora | | Citeseer | |
| :---------------- | :----------------- | :------------- | :----------------- | :------------- |
| | Accuracy | Train time (s) | Accuracy | Train time (s) |
| NodeFormer | 83.4 $\pm$ 0.2 | - | 73.0 $\pm$ 0.3 | - |
| DIFFormer-a | 84.1 $\pm$ 0.6 | 14.215 |75.7 $\pm$ 0.3 | 8.481 |
| DAG+DIFFormer-a | **85.1** $\pm$ 0.7 | **10.967** | **76.2** $\pm$ 0.4 | **7.015** |
Please note that we report the semi-supervised setting (different from our paper), since this is the one considered in the DIFFormer paper.
(edit: we found a bug in the pre-processing with Citeseer and adapted those results)
---
Rebuttal Comment 3.1:
Title: Response to rebuttal
Comment: Thanks for the additional results, which contribute to further substantiating the versatility of the proposed method. I have raised my rating to 6. However, I do have a minor question: why does the inclusion of DAG attention save training time?
---
Rebuttal 4:
Title: This is an important point
Comment: Thank you for asking!
Since we technically (though not effectively, see the global reply) restrict the attention to only the DAG's relationships, which are usually sparse, and implement this using message-passing GNNs, **we obtain considerable performance increases generally, also during training**.
More precisely, the time complexity is reduced to $O(|V| \times n_\infty \times d)$, where $n_\infty$ denotes the average size of the receptive field and is typically much smaller than $|V|$. See also Sec. 3.5. | Summary: The paper proposes a new approach for DAG representation learning using transformers. Representation learning on DAGs is significant, as DAGs can be adapted to many real-world problems, which is also explained in the paper. In addition, a DAG can be formed into a sequence of nodes, so it is natural to think of using transformers to model it. The paper conducts comprehensive experiments to support its claims.
Strengths: The paper utilizes transformer to achieve representation learning on DAG, which is natural. Although transformer has been used on graphs, they are mostly for undirected graphs.
The paper tells the story in an easy-to-read way. It also conducts extensive experiments to show the superior performance of the proposed model.
The task the paper wants to solve is significant in that DAGs can be naturally adapted to many real-world applications. This has also been discussed in the paper.
Weaknesses: (1) It seems that there is one paper that does the same thing [1]. I think this paper should at least be discussed in the paper.
(2) I was wondering why some models in Tables 2 and 3 are not present in Tables 4 and 5, such as PNA and DAGNN.
(3) Will the choice of the root node affect the final results?
[1] Dong et al., PACE: A Parallelizable Computation Encoder for Directed Acyclic Graphs. 2022
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weakness.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations have been discussed by the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Thank you for the constructive comments!**
We indeed missed the PACE paper; please see the global reply for a detailed discussion and experimental results. Overall, the comparison to this transformer tailored to DAGs highlights the effectiveness of our simpler and more general proposal: PACE is already outperformed by vanilla Transformer+DAG.
We also clarified the points below in the paper.
**Q2 Choice of Baselines**
Since we used well-known datasets, we mostly resorted to the models usually reported with them, or in other DAG-specific papers. We did not run those ourselves, but are happy to add some if you have recommendations. DAGNN specifically is hard to run - if it runs at all - on the larger datasets, since it is very expensive (e.g., it takes days for the ogbg-code2 experiments). This is also why we (and also the PACE paper) put such emphasis on efficiency.
**Q3 Root Nodes are Given**
The term "root node" refers to the source nodes, i.e., nodes without predecessors. These are fixed, that is, they are given with the dataset and do not represent architecture choices or parameters. Hence they do not affect the results.
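For illustration, source nodes and the node depths used by the depth-based positional encoding follow deterministically from the edges alone; a minimal sketch with a hypothetical toy DAG:

```python
from collections import deque

# Hypothetical toy DAG: edges point from predecessor to successor.
n = 4
edges = [(0, 2), (1, 2), (2, 3), (1, 3)]
succ = {u: [] for u in range(n)}
indeg = [0] * n
for u, v in edges:
    succ[u].append(v)
    indeg[v] += 1

# Source ("root") nodes are simply the nodes with indegree zero,
# fixed by the data, not a modelling choice.
sources = [u for u in range(n) if indeg[u] == 0]

# Node depth = length of the longest path from any source,
# computed in topological order (Kahn's algorithm).
depth = [0] * n
q = deque(sources)
while q:
    u = q.popleft()
    for v in succ[u]:
        depth[v] = max(depth[v], depth[u] + 1)
        indeg[v] -= 1
        if indeg[v] == 0:
            q.append(v)

print(sources, depth)  # [0, 1] [0, 0, 1, 2]
```

Since both quantities are derived purely from the graph, there is indeed no randomness involved.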
---
Rebuttal Comment 1.1:
Title: Response to author
Comment: Thank the author for clarifying my concerns.
a. I still think that it's a bit weird that the baselines in Tables 2, 4 and 5 are different. Although this is a minor issue, I would recommend the authors make them consistent for a fair comparison and state this in the paper.
b. For the source node, how is it specified in the dataset, at random? I would also suggest the authors clarify this in their paper.
---
Rebuttal 2:
Title: Response by Authors
Comment: a. Please note that we did not intend to ignore the comparison, but rather did not want to give the impression that it is reasonable to run DAGNN over larger graphs, especially when there are alternatives; observe that it is more than 100 times slower than GCN or our model. Nevertheless, this experiment reveals that our framework, even in combination with the most simple vanilla Transformer, provides a useful alternative and is **likely able to make any transformer competitive with SOTA networks tailored to DAGs**.
| | Cora | Citeseer | Pubmed | NA | |
| ----- | ---------------- | ---------------- | ---------------- | ----------------- | ----------------- |
| | Accuracy | Accuracy | Accuracy | RMSE | Pearson's r |
| PNA | 87.03 $\pm$ 0.73 | 73.11 $\pm$ 1.06 | 87.99 $\pm$ 0.26 | 0.691 $\pm$ 0.003 | 0.707 $\pm$ 0.001 |
| DAGNN | 84.49 $\pm$ 0.59 | **74.52 $\pm$ 0.67** | OOM | 0.264 $\pm$ 0.004 | 0.964 $\pm$ 0.001 |
|Transformer |75.92 $\pm$ 0.86| 72.23 $\pm$ 1.06 |OOM|0.285 $\pm$0.004 |0.957 $\pm$ 0.001 |
|**DAG+Transformer**| **87.80** $\pm$ 0.53| **74.42 $\pm$ 0.22** |**89.0** $\pm$ 0.13|**0.253** $\pm$ 0.002 |**0.966** $\pm$0.001 |
b. Sorry for not specifying this clearly: there is no randomness here, and we use the same setting as the related works. More specifically, every dataset comes with the graphs in terms of nodes and edges and, by definition (and without any randomness), source nodes are the nodes whose indegree is zero. For instance, in Python source code graphs, these might be nodes corresponding to a code token at the very beginning (e.g., "def") and, in citation networks, these are nodes corresponding to authors who cite others but who have not been cited yet.
---
Rebuttal Comment 2.1:
Title: Response to author
Comment: Thank the response from the author and my concerns are properly addressed.
---
Rebuttal 3:
Title: Thank you so much!
Comment: This final confirmation is very helpful and highly appreciated. | Summary: The paper proposes a new Transformer-based graph neural network for directed acyclic graphs (DAGs) that restricts the receptive field size of self-attention and adds depth-based node embeddings to improve learning from DAGs. The resulting model is more efficient than previous Graph Transformer models and at the same time more effective on different DAG tasks, as shown in many experiments.
Strengths: 1. The problem of learning from DAGs is interesting and important. Improving Transformers for this problem is a reasonable direction.
2. The introduced improvements are simple, but effective.
3. Computational complexity analysis in Section 3.5 is a nice addition.
4. Experiments are overall convincing.
5. The paper is easy to follow and presented well.
Weaknesses: While I enjoyed reading the paper, the paper needs to address the following weaknesses:
1. The paper is missing related works [A, B], which seem to be doing exactly the same as DAGNN by proposing what they call GatedGNN. Therefore, citing DAGNN but not citing A/B can be misleading for readers.
2. Papers [C, D] are also related and it would be nice to discuss them as well. Given their recent appearance it's not necessary to empirically compare to them (although it would make the paper stronger), but they should be discussed. In particular, [C] is solving the same ogbg-code2 task, but achieves much better performance, can the authors discuss why? [D] is proposing a Transformer-based [40] graph model for DAGs and also add node depth-based PE embeddings similar to those in this submission + degree-based embeddings based on [40] (which could further improve the results in this submission). Given these papers, L122: "to the best of our knowledge, transformers have not been studied particularly in the context of DAGs" should be rephrased.
3. The DAGRA component of the method does not seem to leverage two important sources of information: (1) the direction of edges -- node predecessors and successors are treated equally based on L163-164; (2) the shortest path distance between nodes -- the mask is equal to 0 or −∞ in L203. Even though the DAGPE embeddings can help to recover this information, leveraging (1) and (2) seems very natural and could improve results by better distinguishing certain DAG structures. In related papers using Transformers for undirected/directed graphs, (1) and (2) are leveraged. For example, in [C] the forward and backward edges are treated separately, and in [40], [C], the shortest path distance is used in masking.
4. Section 3.3's conclusion is not very clear. Is this section trying to say that all nodes will eventually communicate with each other if enough layers and/or a large k are used? Please elaborate/rephrase.
5. Results on ogbg-code2 in Table 6 are a bit confusing, because at first it looks like DAG+TF/SAT are using DAGPE, but from the text it sounds like DAGPE is not used. I think it would be clearer to report both results, with DAGPE and without it, even if the latter is better, so that the table does not have blank entries.
**Minor issues:**
- "partial order" - wouldn't it be more appropriate to use "topological order" that is often used in DAG context (e.g. https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.dag.topological_sort.html and [A, B])? "partial order" seems to be more relevant to sets.
- In Tables, bolding results per pair makes sense, but baseline's top performance should be highlighted too. Also, the overall top performance across all rows can be underlined (or highlighted in some other way).
- In Fig. 3, it's hard to conclude that DAG+Nodeformer separate classes better, so I don't think this visualization is helpful in the main text.
- All equations should be numbered so that reviewers and readers can refer to them easily.
- L130: "in more important fields than" - it's a quite subjective statement
- L154: "[a] given node’s predecessors"
- L169: "regular transformers (e.g., Eq. 2)" - Eq. 1?
- L168: "still very different" - please elaborate on how it is different, given that k=∞ is the best
*References:*
- [A] Graph HyperNetworks for Neural Architecture Search. ICLR 2019.
- [B] Parameter Prediction for Unseen Deep Architectures. NeurIPS 2021.
- [C] Can We Scale Transformers to Predict Parameters of Diverse ImageNet Models? ICML 2023.
- [D] Transformers Meet Directed Graphs. ICML 2023.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See Weaknesses above.
I'm looking forward to the author's response and will be willing to update my score.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are discussed, which is appreciated. L344 says that "we found only a limited number of DAG datasets", so one suggestion would be to use a dataset of DAGs introduced in [B].
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Thank you for the very detailed feedback!**
Q4 raises an important point which we address in the global reply. We also added some of the discussion below to the paper, since it covers interesting aspects. For results on additional datasets, please see the reply to WzDZ, Q2. We sincerely thank you for giving us the opportunity to improve the score and are happy to answer further questions if needed.
**Q1 Related Works**
Thank you for sharing these related works. We added discussion about them to the paper.
**Q2 Recent Related Works**
We have noticed a potential mix-up in the citations of [C] and [D] since "[C] Can We Scale Transformers ..." does not address the ogbg-code2 task. We found [D] on arxiv in spring and checked out the code, yet:
- They did not use the data as suggested by the OGB benchmark but adapted the graph construction (e.g., making it more succinct by removing redundancy). This likely also partly explains why they reach such good performance.
- They did not employ the OGB evaluator.
- They do not provide the new datasets they constructed on GitHub, making a proper comparison with their work challenging.
For these reasons, we refrained from directly comparing to [D]: such a comparison is not straightforward and lies beyond the focus of our study.
We updated L122 and also the related works section.
**Q3 On Incorporating Graph Structure Differently**
We intended to design our framework in the simplest and most efficient way possible, in order to ease usability, and therefore decided to focus on the main characteristics of DAGs.
While there may be features that can improve our framework in certain scenarios, we will likely never obtain an exhaustive model; several other graph features have been shown powerful (e.g., sibling distances [1]) and others will be shown useful in the future. Nevertheless, when applying our model to [40], we consider the shortest path distance, and applying it to [C], we can take into account the direction of edges. And of course, we can combine them if needed in a specific context.
That said, it is an interesting question to what extent such features actually increase performance, since they are implicitly covered by the PE. The results below on ogbg-code2 and self-citation suggest that our simple DAG PE is surprisingly strong, as the impact of (1) edge directionality and (2) shortest path distances is limited here.
| | ogbg-code2 | | self-citation | |
| ----------------------- | ----------------------- | ----------------------- | --------------------- | --------------------- |
| | Valid F1 score | Test F1 score | AP | ROC-AUC |
| DAG+transformer | **0.1739 $\pm$ 0.0013** | **0.1879 $\pm$ 0.0015** | **0.638 $\pm$ 0.008** | **0.822 $\pm$ 0.005** |
| DAG+transformer+(1) | 0.1751 $\pm$ 0.0018 | 0.1870 $\pm$ 0.0021 | 0.636 $\pm$ 0.015 | 0.817 $\pm$ 0.005 |
| DAG+transformer+(2) | 0.1749 $\pm$ 0.0011 | 0.1881 $\pm$ 0.0017 | 0.639 $\pm$ 0.006 | 0.823 $\pm$ 0.004 |
| DAG+transformer+(1)+(2) | 0.1750 $\pm$ 0.0017 | 0.1884 $\pm$ 0.0012 | 0.637 $\pm$ 0.005 | 0.821 $\pm$ 0.004 |
| DAG+SAT | **0.1846 $\pm$ 0.0010** | **0.2018 $\pm$ 0.0021** | **0.627 $\pm$ 0.015** | **0.806 $\pm$ 0.007** |
| DAG+SAT+(1) | 0.1839 $\pm$ 0.0014 | 0.1978 $\pm$ 0.0028 | 0.623 $\pm$ 0.013 | 0.804 $\pm$ 0.007 |
| DAG+SAT+(2) | 0.1851 $\pm$ 0.0008 | 0.1991 $\pm$ 0.0018 | 0.627 $\pm$ 0.016 | 0.810 $\pm$ 0.006 |
| DAG+SAT+(1)+(2) | 0.1852 $\pm$ 0.0013 | 0.1986 $\pm$ 0.0019 | 0.628 $\pm$ 0.007 | 0.811 $\pm$ 0.005 |
**Q5 Additional ogbg-code2 Results**
This is a fair point; we revised the table as follows.
| | Valid F1 score | Test F1 score |
| --------- | ------------------- | ------------------- |
| DAG+TF | 0.1731 $\pm$ 0.0014 | 0.1895 $\pm$ 0.0014 |
| (-) DAGPE | 0.1739 $\pm$ 0.0013 | 0.1879 $\pm$ 0.0015 |
| DAG+SAT | 0.1821 $\pm$ 0.0013 | 0.1982 $\pm$ 0.0010 |
| (-) DAGPE | 0.1846 $\pm$ 0.0010 | 0.2018 $\pm$ 0.0021 |
**Q6 Minor Issues**
Thank you for checking on that level of detail! We updated the paper accordingly.
Regarding $k=\infty$, note that this setting still strongly decreases the complexity compared to regular transformers, since only the reachable nodes are considered for attention. However, given the special DAG structure, this likely does not decrease the effective attention. A detailed discussion on the power of our attention is given in the global reply.
----------------------
[1] Zügner et al. “Language-Agnostic Representation Learning of Source Code from Structure and Context.” ICLR 2020.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. It clarifies some of my concerns, therefore I increased the score from 5 to 6.
---
Rebuttal 2:
Title: Thank you for getting back to us!
Comment: We highly appreciate the careful and positive evaluation.
In case there is anything we can do to address the remaining concerns, please let us know! | Rebuttal 1:
Rebuttal: **We thank all reviewers for the very fair, detailed, and constructive feedback!**
We are sorry for the unnecessary confusion caused by the missing **explanation of the power of our DAG attention** and hope that the details below clarify our contribution. Since the scores are borderline overall, we would appreciate it if the reviewers took this into account. We are happy to provide additional information if needed.
We also provide a **detailed comparison to a very related paper we missed, but which overall highlights the potential impact** of our work.
------------
**G1 Power of DAG Attention**
Technically, we restrict the attention to reachable nodes and, in this way, obtain considerable efficiency gains. Yet, our architecture is tailored to the special DAG structure, and we can show that this design offers similar expressivity to regular transformers.
It is important to note that in our framework *all* nodes directly communicate with at least one source node (i.e., a node without predecessors) by the DAG structure. This is specifically the case because we set $k=\infty$. Hence, *2 layers are always enough to establish communication between any two nodes that have a common source node*. Furthermore, DAG classification datasets in particular usually contain DAGs with a single source node (e.g., ogbg-code2 and NA).
For *DAGs with $m$ source nodes, we need $2m$ layers for full communication*, assuming the DAG is connected. In that case, every pair of source nodes has a common successor through which communication can happen. Further, connectedness is a reasonable assumption; otherwise, communication is likely not needed in most scenarios. Our empirical results demonstrate that our design likely does not limit expressivity on many datasets, including large ones.
The source nodes may seem to represent a certain kind of bottleneck since, essentially, our architecture's bias emphasizes DAG relationships while re-directing the remaining relationships in the regular Transformer's full attention matrix. But Sec. 3.3 shows that the *importance of the relationships we model is in line with well-known random walk theory*, and our qualitative performance increases also demonstrate that putting more emphasis on DAG relationships can be beneficial in a variety of use cases.
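As a toy illustration of the reachability-restricted attention discussed above (a hypothetical sketch with made-up node indices, not our actual implementation), the following builds the attention mask for a small DAG with $k=\infty$ and verifies that every node communicates with at least one source node:

```python
# Hypothetical sketch: reachability-restricted attention mask for a DAG
# with k = infinity (each node attends to its ancestors and descendants).
edges = {0: [2], 1: [2], 2: [3], 3: []}   # two source nodes (0, 1)
n = len(edges)

def reachable(u):
    # iterative DFS collecting all descendants of u
    seen, stack = set(), [u]
    while stack:
        v = stack.pop()
        for w in edges[v]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

desc = {u: reachable(u) for u in range(n)}
# mask[i][j] is True if i and j may attend to each other:
# i is an ancestor or descendant of j (or i == j).
mask = [[i == j or j in desc[i] or i in desc[j] for j in range(n)]
        for i in range(n)]

sources = [u for u in range(n)
           if all(u not in desc[v] for v in range(n) if v != u)]
# every node communicates with at least one source node
assert all(any(mask[s][j] for s in sources) for j in range(n))
```

Since only reachable pairs are unmasked, the number of active attention entries scales with the size of the transitive closure rather than $n^2$, which is where the efficiency gains come from.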
---------------
**G2 Comparison to PACE [1]**
As two reviewers noted, we missed a very related transformer architecture tailored to DAGs, PACE, published at ICML 2022. Interestingly, the comparison serves well to highlight our contribution:
- Most importantly, *PACE is one specific model* whereas we propose a framework which can be flexibly used with any transformer.
- Specifically, *PACE uses a more complex PE* and its attention is *based only on the directed transitive closure*. In contrast, we use reachability, which also accounts for data where reverse relationships are critical.
- PACE injectively maps DAGs to sequences of node embeddings and then processes them as a sequence in a masked transformer. The mask restricts the attention, with an intention similar to our restricted attention. Yet, *the model still has the usual quadratic complexity* of transformers, while we propose an implementation based on message passing GNNs that exploits our novel attention to reduce complexity.
- Further, PACE is limited in applicability, as the masked transformer *fails to consider edge attributes* of graphs, while our framework preserves all graph information.
- Our framework's simple design also seems to be more effective empirically: *even in combination with a vanilla Transformer, we obtain better results*. Given the simplicity of our model, we also see considerable efficiency gains.
| | ogbg-code2 | | | self-citation | | Cora | Citeseer | Pubmed |
| --------------- | ---------------- | ---------------- | ----------- | ---------------- | ----------------- | ------------------ | ------------------- | ------------------ |
| | Valid F1 score | Test F1 score | Time(epoch) | AP | ROC-AUC | Accuracy | Accuracy | Accuracy |
| PACE | 16.3$\pm$0.3 | 17.8$\pm$0.2 | 2410s | 52.1$\pm$1.8 | 75.9$\pm$0.7 | 79.47$\pm$0.63 | 73.65$\pm$1.23 | OOM |
| DAG+Transformer | **17.4$\pm$0.1** | **18.8$\pm$0.2** | **591s** | **63.8$\pm$0.8** | **82.2$\pm$75.9** | **87.80$\pm$0.53** | **74.42$\pm$0.22** | **89.01$\pm$0.13** |
[1] PACE: A parallelizable computation encoder for directed acyclic graphs, ICML 2022.
------------------------------
**G3 Novelty and Contribution**
We hope that the above details resolve the doubts about our contributions.
- We propose a simple, easy to use architecture, which tailors any transformer to DAGs.
- It is proven effective over a wide range of well-known benchmarks.
- In particular, it makes the various baseline transformers we tested competitive with, or even outperform, proven neural networks that were designed for DAGs.
- The qualitative performance is complemented by considerable gains in efficiency, which is impressive for transformers in general but also in comparison to existing DAG architectures, such as DAGNN. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Incentivizing Honesty among Competitors in Collaborative Learning and Optimization | Accept (poster) | Summary: The paper presents an incentive mechanism to encourage honest data reporting in the presence of spiteful behavior aiming to harm other participants.
Strengths:
The paper considers an interesting and novel setting.
It shows that incentive schemes can in principle induce cooperative behavior.
The incentive schemes show both budget-balance and individual rationality (ex ante).
Weaknesses: While the reward scheme in the paper has truthful reporting as an equilibrium, it is well-known that peer-prediction schemes also admit uninformative equilibria; for example in this case all participants could report the same data without any penalty.
*** This was my most important worry and the authors have addressed this weakness in their rebuttal ***
The schemes require that the participants observe IID data, which is usually not the case in federated/distributed learning.
*** This still remains to be improved ***
Only particular attack and defense strategies are considered.
*** The authors have convinced me that they have gone far enough at least for this paper. ***
There is no consideration of data privacy.
*** This remains future work. ***
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors:
How can you scale the penalties without knowing the utilities of the model to the participants?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I think the fact that utilities have to be known needs to be stated more clearly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review. We are glad you find our setting novel and interesting. We aim to address your concerns in the following:
**While the reward scheme in the paper has truthful reporting as an equilibrium, it is well-known that peer-prediction schemes also admit uninformative equilibria; for example in this case all participants could report the same data without any penalty.**
Thank you for bringing this up. We agree that equilibria beyond the honest one are usually an issue in peer prediction. However, as discussed in the general response as well, we believe that these are not as problematic in our case. First, as in the general peer prediction setting, honesty offers a natural Schelling point and other equilibria are substantially harder to coordinate on. Second, as laid out in our general response, any non-honest equilibrium will have at least two players with $b^i \neq 0$, such that all players receive a larger MSE than at the honest equilibrium. Unlike in peer prediction, it is therefore unclear why players would coordinate on a non-honest equilibrium in our setting for the most natural parametrization, $\lambda_i>1$, where players care more about the quality of their own model than about damaging others' models.
We will update our manuscript to better reflect this important point.
**The schemes require that the participants observe IID data, which is usually not the case in federated/distributed learning.**
We disagree that we cover IID data only. The general formulation in Section 3 allows for dependence between the samples (line 107) and we explicitly model heterogeneity in the mean estimation case (line 169).
While our SGD theorems currently assume IID data, we believe that they could be extended to the non-IID case in future work. In particular, the FeMNIST dataset contains heterogeneous data, and our experiments demonstrate the efficacy of our mechanism for SGD on that dataset.
**Only particular attack and defense strategies are considered.**
We would like to highlight that the attack strategies we consider in the mean estimation setting are very general: as discussed in line 200 of the paper and our general response, Equation 3 essentially parameterizes arbitrary attacks. Correspondingly, Equation 4, which prevents unrealistic strategies like always sending the true mean $\mu$ despite only having access to samples, is the only restriction placed on the strategy space for mean estimation.
While the strategy spaces are clearly more restricted in our SGD setting, we would like to point out that intertemporal dependencies make the analysis of that case highly nontrivial, even with our restrictions.
We did not consider arbitrary defense strategies for two reasons: First, our mechanisms already fully incentivize honesty, such that there is no need for further defenses. Second, as defense strategies can be viewed as statistical inference procedures, analyzing arbitrary defense strategies would require a fundamental breakthrough in statistics: In particular, due to Stein’s paradox, the optimal defense strategy is currently not known for $d\geq 3$ dimensions, even in a single player setting with data distributions restricted to isotropic gaussians.
**There is no consideration of data privacy.**
We first note that the considered mechanisms all use gradient information only, so in this sense our approach is just as private/non-private as classic Federated Learning.
Additionally, privacy concerns are reflected in the considered attack model: Indeed adding noise is a standard way of increasing the Differential Privacy of an algorithm and is often used in practice in Federated Learning, to increase users’ privacy. In that sense, our mechanism could be interpreted as a way to balance players’ privacy concerns with the degradation of the learnt model that is caused by actions taken to ensure privacy.
Beyond that, we indeed do not explicitly model privacy in the objectives, as it is an orthogonal aspect to competing incentives, which are the focus of this work.
**How can you scale the penalties without knowing the utilities of the model to the participants?**
It is generally impossible to use any penalty-based incentive scheme without some knowledge about the functional form of players’ utility functions, as rescaling the utility functions by a factor of $K$ would also increase the smallest penalty factor that achieves honesty by $K$.
That said, a key advantage of our mechanism is that we only require limited information about players’ utilities ($\lambda_i$ in the Mean Estimation case and the smoothness, Lipschitz and convexity parameters in the SGD case) to correctly scale the penalty. In particular, we do not require any information about the true values of the unknown parameters $\mu$ or $\theta^*$.
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses.
Regarding multiple equilibria, I like the point that participants would have to coordinate and use side payments to compensate the losers. An even better result would be if you could show that the sum of the rewards, or at least the expectation of that sum, could not increase; then indeed any collusion would not be stable, and it would greatly strengthen your paper.
I agree that there has to be some restriction on the strategy spaces. If space permits, it would be useful to have some discussion on what strategies are allowed and what strategies are not considered.
With regards to point 4., it would be good to explicitly point out in the paper what knowledge of participant utilities is required, as this is an important limitation of the work.
---
Reply to Comment 1.1.1:
Comment: Thank you for your timely comment! We are glad that our points were well-received.
Thank you for the detailed suggestion regarding the result on multiple equilibria. What you suggested indeed holds (in expectation) as long as all players are more concerned about their own model’s quality than about others’ ($\lambda_i>1$).
To demonstrate why, please note that the (unpenalized) expected reward for player $i$ equals $\frac{\sum_{j \neq i}||\theta^j - \mu||^2}{N-1} - \lambda_i ||\theta^i - \mu||^2$, such that the sum of all players' rewards equals $\sum_j (1-\lambda_j) ||\theta^j - \mu||^2$. This is monotonically decreasing in all players' MSEs $||\theta^i - \mu||^2$ and, as discussed in our initial response, these MSEs will always be larger for all players at any non-honest equilibrium. Therefore, the sum of expected (unpenalized) rewards of all players is smaller at any non-honest equilibrium than at the honest one. Meanwhile, as discussed in the paper, the penalties paid by all players add up to zero in expectation. Therefore, the result also holds for the sum of penalized rewards.
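This algebra can be checked numerically with a minimal sketch (the MSE and $\lambda$ values below are illustrative, not taken from the paper's experiments):

```python
# Player i's unpenalized expected reward:
#   sum_{j != i} ||theta^j - mu||^2 / (N - 1)  -  lambda_i * ||theta^i - mu||^2
def reward(i, mses, lams):
    n = len(mses)
    return sum(m for j, m in enumerate(mses) if j != i) / (n - 1) - lams[i] * mses[i]

def total(mses, lams):
    return sum(reward(i, mses, lams) for i in range(len(mses)))

lams = [1.5, 2.0, 1.2]          # all lambda_j > 1 (illustrative)
honest = [0.10, 0.12, 0.09]     # MSEs at the honest equilibrium (illustrative)
attack = [0.20, 0.15, 0.11]     # pointwise-larger MSEs at a non-honest profile

# the sum of rewards matches the closed form sum_j (1 - lambda_j) * MSE_j ...
closed_form = sum((1 - l) * m for l, m in zip(lams, honest))
assert abs(total(honest, lams) - closed_form) < 1e-12
# ... and larger MSEs yield a strictly smaller total reward
assert total(attack, lams) < total(honest, lams)
```

The closed form follows because summing $\frac{S - m_i}{N-1} - \lambda_i m_i$ over all $i$ (with $S = \sum_j m_j$) telescopes to $S - \sum_i \lambda_i m_i$.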
We will add this result to our paper and highlight its importance for equilibrium selection. We will also add an additional discussion on the strategy spaces, and aim to better highlight the knowledge required about players’ utility functions as a limitation.
We hope these improvements address the reviewer’s original concerns and are happy to answer any further questions.
---
Reply to Comment 1.1.2:
Comment: Thank you again for your response. Since the discussion period is progressing, we wanted to check in whether the result from our last response successfully addresses your concern about multiple equilibria and affects your overall paper evaluation? We would be happy to provide more clarifications if needed. | Summary: The paper considers a federated learning setting with strategic data resources. The authors assume that the entities taking part in the learning process are selfish players incentivized to get the best model but benefit if their competitors receive inaccurate models. This selfish behavior pushes players to lie to the central learning mechanism in their reports.
The authors consider two cases: Mean estimation and a multi-round SGD on strongly convex objectives. They model the players' strategies as multiplicative/additive factors that could be added to the players' actual local computations. The authors show that even in the straightforward case of mean estimation, a PNE does not exist. They offer two remedies: Monetary payments (via peer-prediction techniques) and punishments (noisy model updates by the central mechanism). Then, they show that a PNE exists and characterize the form of payments/punishments required.
Finally, they conduct an experimental analysis demonstrating that their remedies positively affect the learning procedure.
Strengths: 1. The paper deals with a practical issue that is somewhat under-explored.
2. Despite the abundance of notation, the authors have done an excellent job of making the paper read smoothly.
Weaknesses: 1. The paper adopts a game theoretic approach, but many modeling assumptions seem cumbersome and unjustified (see questions below).
2. It is hard to assess this paper's technical contribution. In particular, peer prediction-based mechanisms are well-studied ideas, and the authors did not explain whether this paper adopts them in a plug-and-play manner or presents new non-trivial derivations. The "our contribution" part addresses the paper's content but not its marginal contribution to the line of research, making the technical contribution hard to assess.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Strategy spaces: The assumption that players report their updates along with $\alpha^i \xi^i$, where $\alpha^i$ is player $ i$'s strategy and $\xi^i$ is a random variable seems completely arbitrary and unjustified. Typically, one makes assumptions about what players aim to do (e.g., maximize their payoff) and what they can do (e.g., bounded computation or memory, acting myopically, etc.). Explicit assumptions about players' strategies without proper grounding and reasoning are cumbersome and inconvenient. Could the authors justify what real-world scenarios the strategy spaces in Eq. (3) models? Could the authors present assumptions about player rationality\behavior that would recover their modeling?
2. The assumption about $b^i$ in Eq. (4): What is the justification for this? Indeed, it facilitates the analysis but seems entirely out of context.
3. Multi-round: The authors assume that in the multi-round case, players pick their strategies only once initially. What is the rationale for this modeling? Allowing players to change their strategies throughout the execution will be harder to analyze, but this compromise does not make much sense.
4. Corollary 2: The authors show that a pure Nash equilibrium does not exist, concluding that "without modifications to the protocol, no player can benefit from collaborative learning." While this might be true, I do not see how the non-existence of a Nash equilibrium implies collaboration is useless. To reach this conclusion, the authors must show that players are better off (i.e., their estimates are more accurate) without the protocol. Where is this analysis located in the paper? Further, I suspect that a mixed Nash equilibrium does exist, so arguing about whether players can benefit from collaboration should at least consider their payoff under some form of a solution concept (be it mixed equilibrium, sink equilibrium, or otherwise). My question: Could the authors describe why the non-existence of a PNE suggests that collaboration is useless?
5. The methods the authors adopt, e.g., scoring rules, seem to treat the most general case where players can report whatever they want, without limiting the structure of their message (Eq. (3) for the mean estimation case). This is at least true in the mean estimation case. The same thing applies to the payment case. What does this paper gain from making the limiting (and, as I argued before, the highly unjustified) assumption of the structured strategy spaces?
6. What is the technical modification of peer-prediction\noise communication required for this paper? How novel is the derivation needed for this paper, and how does it differ from previous papers? Answers to this question could facilitate the assessment of this paper's contribution.
Minor:
• Why are the superscripts m and w needed in lines 198 and 211 (they also appear later in the paper)?
• 173: clients->players
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive and thoughtful review. We are glad you found our paper smooth to read. We aim to answer your questions in the following:
**I suspect that a mixed Nash equilibrium does exist. Also, what does the existence of Nash equilibria have to do with benefits from collaboration?**
Thank you for pointing this out. Our result holds for mixed Nash equilibria with one important caveat: As explained in line 223, the players' expected reward is monotonic in $\mathbb{E}(\alpha^j(x^j)^2)$ (with the expectation taken over both the sample $x^j$ and a player's random strategy choice for mixed strategies) as long as any other player uses a fixed $\beta^i<1$. As $\beta^i$ and $\alpha^j$ are independent, this extends to random choices of $\beta^i$ with $P(\beta^i<1)>0$. But the optimal $\beta^i$ never equals one for finite attacks, so no strategy profile with finite rewards can be stable. However, your comment made us carefully revisit the corollary, and we noticed that we do not explicitly rule out infinite values for $\mathbb{E}(\alpha^j(x^j)^2)$ or $\mathbb{E}(||b^j(x^j)||^2)$. For these infinite strategies, our monotonicity-based argument fails.
The fact that the only equilibria are “at infinity” also explains why collaboration does not help: If one player uses an “infinite” strategy, all other players’ optimal $\beta^i$ equals one, such that they completely ignore others’ data, like in the non-collaborative case.
We apologize for the confusion and will update our paper to better highlight our previously tacit assumption on the finiteness of strategies.
**What do the strategy spaces in Eq. (3) model? What is the justification for assumption (4)?**
As we discuss in more detail in line 200 of the paper and the general response, Equation 3 without the restrictions posed by Equation 4 essentially expresses the most general attack strategy possible. This is because any possible modification to the mean can be decomposed into a deterministic shift and adding zero-mean noise.
Equation 4 prevents “non-general” strategies whose success depends on the precise value of the parameter $\mu$ to be estimated. In particular, without it strategies that cannot realistically be implemented without knowing the true parameter $\mu$, like always sending the true mean $\mu$, would be admissible. Behaviourally, the assumption can be interpreted as “players do not base their strategies on guesses about the real parameter $\mu$ that go beyond information obtained from their sample $x^i$."
While we do agree that Equation 4 also prevents some strategies without this issue, we currently do not know of a weaker assumption that excludes unrealistic strategies while retaining the same mathematical simplicity.
We will update our manuscript to make this point more clear.
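The decomposition argument can be made concrete with a small simulation (an illustrative sketch, not the paper's code; the manipulation `f` and all numbers are hypothetical): any report $\tilde{x} = f(x)$ can be rewritten as the sample plus a deterministic shift $b = \mathbb{E}[f(x) - x]$ plus zero-mean noise.

```python
# Decompose an arbitrary manipulation into shift + zero-mean noise.
import random

random.seed(0)
mu = 3.0
samples = [random.gauss(mu, 1.0) for _ in range(200_000)]

def f(x):
    # some arbitrary hypothetical manipulation of the sample
    return 0.7 * x + 1.0 + random.gauss(0.0, 0.5)

reports = [f(x) for x in samples]
diffs = [r - x for r, x in zip(reports, samples)]
b = sum(diffs) / len(diffs)        # estimated deterministic shift component
noise = [d - b for d in diffs]     # residual is zero-mean by construction
assert abs(sum(noise) / len(noise)) < 1e-9
```

Here the true shift is $\mathbb{E}[f(x) - x] = -0.3\mu + 1 = 0.1$, and the empirical `b` recovers it up to sampling error, while the residual carries the zero-mean noise component.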
**Why is the strategy space in the multi-round setting restricted to predetermined strategies?**
Our attack structure for SGD is inspired by data-hiding (which increases the variance of the gradient estimates) and Differential Privacy defenses (which add zero-mean noise to the gradient in order to increase the privacy of local data). We opted for non-adaptive strategies, as adaptive strategies will lead to complex dependencies between consecutive SGD steps. In particular, this would make our already quite involved analysis of the SGD case even more difficult from a purely optimization perspective, since arbitrary dependencies between rounds are highly non-standard in usual gradient-based optimization proofs.
**Why are assumptions made on the strategy spaces?**
We believe that rigorous analysis of a (relatively broad class of) special cases is essential for progress whenever analyzing the most general case is not tractable. In particular, if we considered more general defenses in the mean estimation setting, even solving a single player version of our game would essentially amount to the classic statistical problem of finding an admissible estimator. Unfortunately, we are unaware of such results for $d \geq 3$ dimensions in the literature, due to Stein’s paradox. Please also refer to our general response for further discussion on these matters.
**What is the technical modification of peer-prediction required for this paper? How novel is the derivation and how does it differ from previous papers?**
To the best of our knowledge, our use of noise rather than explicit payments to implement a peer prediction mechanism (5.2) is completely novel. We achieve this using a new reduction of the noise-based case to our payment-based result.
Similarly, we believe to be the first to use a peer prediction mechanism in a multi-round optimization scheme like SGD. Unlike in prior work, it is not possible to base the mechanism on the final output $\theta$, as this is a function of all players’ strategies over multiple time steps. This makes it impossible to apply standard arguments that relate the quality of a player’s estimate and how well their estimate predicts other players’ estimates.
Instead, we employ a novel recursive bound for the squared norm of differences in SGD-iterates between a clean trajectory and a trajectory with time-varying gradient noise, that has to take into account ripple effects of noise added during early time steps. This allows us to bound the effect a player that uses a particular strategy can have on the final SGD iterate and thus the loss.
Our first result (5.1) is closest to the existing literature, but there are still important differences: a) in peer prediction, the goal is to incentivize effort to produce good estimates, while our setting focuses on penalizing malicious manipulations; b) as far as we know, our redistribution scheme used to achieve zero expected payment for honest players has not previously been analyzed; c) we consider estimates for $d$-dimensional vectors and arbitrary distributions.
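The flavor of the recursive bound can be illustrated on a one-dimensional strongly convex quadratic (a hypothetical toy example, not the paper's proof; all constants are made up). For $f(\theta) = \theta^2/2$ with step size $\eta$, the gap between a clean and a noise-injected trajectory satisfies $d_{t+1} = (1-\eta) d_t - \eta n_t$, so unrolling gives $|d_t| \le \eta \max_k |n_k| \sum_{j}(1-\eta)^j \le \max_k |n_k|$:

```python
# Toy check: the clean/noisy SGD gap stays within the max noise magnitude.
import random

random.seed(1)
eta, T = 0.1, 500
theta_clean = theta_noisy = 5.0
max_noise, gap = 0.3, 0.0
for _ in range(T):
    n = random.uniform(-max_noise, max_noise)
    theta_clean -= eta * theta_clean           # exact gradient step
    theta_noisy -= eta * (theta_noisy + n)     # noisy gradient step
    gap = max(gap, abs(theta_clean - theta_noisy))

assert gap <= max_noise + 1e-12   # bound from unrolling the recursion
```

In the paper's setting the noise is time-varying and feeds through many rounds, which is what makes the actual recursive analysis nontrivial; this sketch only shows the contraction mechanism that keeps early noise from blowing up.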
**What do superscripts m and w mean?**
Thank you for pointing this out. We have removed these superscripts.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their reply. I will take it into account during further discussion.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer, thank for your response! Please do let us know about any remaining questions or concerns, so that we can address them until the end of the author-reviewer discussion period. | Summary: The paper studies a centralized collaborative learning problem. Authors provide theoretical guarantees for an attack method and a defense method. Further, the paper proposes two mechanisms to incentivize honesty: a method that uses an explicit side payment method and requires transferable utility, a centralized punishment mechanism where a central server adds noise to the estimates it sends to players that have sent suspicious updates. Simulation results are provided supporting the claims.
Strengths: -- The paper is well written and easy to follow.
-- Authors provide a description of related work and background. The problem formulation considered in the paper is well positioned in the relevant literature.
-- Authors provide several theoretical results making novel technical contributions. Authors provide discussions around the implications of the theoretical results.
-- The paper provides simulation results supporting the theoretical claims.
Weaknesses: -- Problem formulation is well positioned in the relevant literature. However, authors do not provide a discussion on how their results and methods compare to existing literature.
-- Authors provide numerical simulation results supporting their analysis. However, authors fail to compare their method with existing methods in the simulations.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: -- I encourage authors provide a comparison with existing methods.
-- Is it possible to add a proof sketch in the main paper highlighting the technical challenges addressed in the analysis?
-- Can these results be extended to the decentralized setting?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Authors do not provide a discussion on limitations of their method. Authors include a discussion on societal impacts in the Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive and thoughtful review. We are glad you found our paper easy to follow and our contributions novel. We aim to answer your questions and address your concerns in the following:
**Can these results be extended to the decentralized setting?**
In a decentralized setting where players publicly communicate their estimates to all other players, our payment-based mechanism could be implemented without a server, as the payment is a simple function of all players’ communicated estimates, so everyone’s payment can be computed publicly. However, this does not work for our noise-based mechanism, as players can just personally aggregate the public clean estimates when there is no payment-based penalty.
**There is no discussion on how the methods/results compare to the existing literature. Is it possible to add a proof sketch in the main paper highlighting the technical challenges addressed in the analysis?**
We appreciate this suggestion and plan to add further details on the key technical challenges in future versions of the paper with an extended page limit.
As a brief summary, the first key technical insight is that ideas from peer prediction can be applied to our novel setting of competition in collaborative learning. With this insight, Theorem 5.1 can be derived with versions of techniques used in previous work on peer prediction, generalized to treat the d-dimensional case without strong distributional assumptions.
Next, as far as we know, our noise-based mechanism is entirely novel and the analysis required a nontrivial reduction of that mechanism to the payment-based case. A key challenge in the analysis is that the magnitude of the added noise is correlated with a player's sampling error, such that standard independence-based decompositions of the squared loss do not work.
Lastly, our treatment of peer prediction for SGD is also entirely novel to the best of our knowledge. In this setting, it is not possible to base the mechanism on the final output as is usually done in peer prediction. Instead, our analysis bounds the effect a player can have on the final loss by manipulating an update at time t, which can be highly complicated due to ripple effects, using a novel recursive bound for the squared norm of differences in SGD-iterates between a clean trajectory and a trajectory with time-varying gradient noise. This allows us to connect the expected deviations between a player’s gradient and other gradients to the overall “damage” they cause, and bound that damage by penalizing gradient deviations.
**There is no comparison to existing methods in the experiments.**
Thank you for the suggestion. We would like to include comparisons to baseline methods from existing work, but since the formal setting we consider is entirely novel, there are no established methods to fairly compare to. While there are existing methods that aim to make collaborative learning robust to updates from dishonest players, our methods are orthogonal as they instead aim to prevent dishonesty in the first place.
In order to demonstrate how our work complements existing robust collaborative learning methods, we provide an additional experiment using one such method: robust stochastic gradient descent using the median rather than the mean of players' updates (Yin et al., ICML 2018). Please refer to the PDF response for the results. We present two plots to demonstrate how the median compares to the standard mean-based aggregation in the presence of noise (Figure 4), and how our mechanism performs in combination with median-based aggregation (Figure 3). As can be seen from the first plot, while the median method increases robustness to noise, it still performs worse the more noise is added, so that incentivizing honesty remains important. From the second plot we see that, similarly to the experiment in the main text, players are incentivized not to send noise as long as the constant $C$, which controls the strength of our penalties, is sufficiently large. Therefore, our mechanisms are effective in preventing attacks that would otherwise hurt the performance of the players' models, even when a robust FL defense is used.
Yin, Dong, et al. "Byzantine-robust distributed learning: Towards optimal statistical rates". In: International Conference on Machine Learning (ICML), 2018. | Summary: The authors investigate the issue of manipulation (in the form of falsifying data or model updates) among agents who mutually contribute to a shared model. Incentives for such behaviors arise when agents possess differing objectives with respect to the shared model. The authors first demonstrate that without external intervention, these incentives are essentially unavoidable. However, the authors propose two methods for inducing incentive compatibility in such settings; namely payments when utility is transferable, and noisy server messages when utility is non-transferable. The authors derive these mechanisms and provide additional theoretical results for two settings of collaborative online learning, single shot mean estimation and multi shot shared gradient updates. Lastly the authors provide experimental results on the FeMNIST data set demonstrating that their mechanisms dissuade strategic behavior.
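For intuition, the median-based aggregation from Yin et al. (2018) can be sketched as follows. This is a minimal illustration with synthetic numbers, not the code used in the experiments above; the player counts, noise scales, and function names are our own assumptions:

```python
import numpy as np

def aggregate(updates, method="mean"):
    """Aggregate per-player updates (shape: n_players x d).

    "mean" is the standard aggregation; "median" is the coordinate-wise
    median of Yin et al. (ICML 2018), which is more robust to a minority
    of noisy or manipulated updates.
    """
    updates = np.asarray(updates)
    if method == "mean":
        return updates.mean(axis=0)
    return np.median(updates, axis=0)

rng = np.random.default_rng(0)
true_grad = np.ones(4)
# Nine honest players send the true gradient plus small sampling noise;
# one player adds large noise (an alpha_A-style attack).
honest = true_grad + 0.01 * rng.standard_normal((9, 4))
noisy_player = true_grad + 5.0 * rng.standard_normal((1, 4))
updates = np.vstack([honest, noisy_player])

err_mean = np.linalg.norm(aggregate(updates, "mean") - true_grad)
err_median = np.linalg.norm(aggregate(updates, "median") - true_grad)
print(f"mean error: {err_mean:.3f}, median error: {err_median:.3f}")
```

The median error stays on the order of the honest players' sampling noise, while the mean error scales with the attacker's noise. This mirrors the qualitative point above: the median increases robustness but does not remove the incentive problem, since more noise still degrades the aggregate.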
Strengths: 1. Distributed learning is a rapidly growing area and there is a real danger that strategic agents could disrupt the efficacy of these systems if their incentives are not properly accounted for. As such, the authors’ work is well motivated and helps to fill an important piece which is currently missing from the literature.
2. Manipulations in these types of settings are often framed as being adversarial. The authors model agents as being strategic rather than purely adversarial. As we have seen in areas like supervised learning the differences between adversarial and strategic agents can be highly consequential in terms of designing robust systems; considering both types of behavior is imperative (the former is already covered in prior work).
3. The need for such mechanisms is well motivated both from a narrative perspective and from a theoretical perspective (Corollary 4.2).
- Considering the case of non-transferable utility increases the applicability of the authors results, and the case of transferable utility provides the system more options to incentivize collaboration when payments are feasible.
4. The authors’ results are constructive, rather than simply existential. For example, rather than saying that there exists a $C$ and $\lambda$ such that players participate honestly, the authors provide specific ranges of these variables for which honest behavior is an equilibrium. This makes it easier for others to implement their methods in real-world scenarios.
5. While some strong assumptions are made for the theoretical results, such as convexity, the inclusion of the experimental results helps demonstrate that the authors’ approach is effective even when such assumptions do not hold.
6. The paper is well written and the authors take the time to outline the intuition and implications of their results.
Weaknesses: 1. The mechanisms proposed only induce truthfulness as a Nash Eq, implying that other non-truthful equilibria exist. I understand that these types of results are standard throughout the literature, but when deploying these mechanisms in practice it is important to note that we have no guarantee which equilibrium agents will end up in. This is a far weaker result than truthfulness being a dominant strategy.
2. Similar to the last point, the mechanisms do not appear to be collusion proof. In particular, one player could pretend to represent multiple clients (i.e., sending multiple updates each round). For example, in the case of single-shot mean estimation, if such an agent monopolizes a sufficiently large portion of the data being submitted, they could force the other agents into submitting any desired value for large enough $C$ (since the deviation penalty will outweigh the other parts of their utility). If agents are willing to misreport data, they are probably also willing to collude. Not accounting for this possibility limits the scope of the work. With that said, the authors appear to be the first to study robustness to strategic behavior in distributed systems, so perhaps asking for additional results on collusion is too much. However, this should be more clearly stated as a limitation of the work.
3. The experiments are somewhat limited. The primary contribution of the paper is theoretical and the point of these experiments is to show that the payment scheme works even when convexity does not hold, but this observation on a single dataset, for a single model, is a bit unconvincing. In particular, I would expect that the average reward received when increasing $\alpha_A$ would decrease more rapidly for larger $C$, however, it is not clear to me that the small amount of fines paid by honest players would hold across different scenarios.
### Comments and minor issues: no impact on score and need not be addressed in the author response.
1. Line 259: should this say “... at the honest equilibrium [when] ….”? In the supplement this is stated as an [and] rather than a [when].
2. Corollary 4.2 should probably be a Theorem. This is actually quite an interesting result and somewhat non-trivial based on the proofs. Although Theorem 4.1 is doing most of the heavy lifting here the corollary is actually the main results, while the Theorem feels more like a helping lemma.
3. Links to theorems and references are broken in the main body. Looks like this is the result of compiling the document with the supplement and then using a PDF editor to trim the supplement pages.
4. Figure 6 in the supplement takes up its own page.
5. The naming convention of Theorems is not consistent with the main body and makes it difficult to find a specific theorem within the supplement, unless using the reference provided in the main body.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your positive and thoughtful review. We are glad you liked our paper and plan to incorporate your feedback in the next revision. In the following, we aim to address your concerns:
**The mechanisms proposed only induce truthfulness as a Nash Eq, implying that other non-truthful equilibria exist.**
Thank you for pointing out this important issue. We agree that honesty as a dominant strategy would be a more desirable result. As we discuss in more detail in the general response, we do not think that the other existing Nash equilibria would pose a large problem in practice. In particular, they are simultaneously more difficult to coordinate on and give all players larger MSEs than at the honest equilibrium, such that it is unlikely that players will choose to coordinate on these equilibria when they care more about their own models than about others' ($\lambda_i>1$).
We will update our manuscript to better reflect this important point.
**The mechanisms do not appear to be collusion proof**
Thank you for bringing this up. Collusion in our framework is an important topic that would be interesting to analyze in future work, and we will highlight this limitation in an updated version of the paper.
We do expect our mechanisms to be collusion-proof against small coalitions because an actor that pretends to be multiple actors will also have their penalty multiplied accordingly. However, we do agree that there is a problem once the colluding coalition significantly affects the mean estimate, as the shifted mean would reduce the penalty paid by each member of the coalition.
**There are only experiments for a single dataset and model.**
We reran our experiment on a subset of the LEAF Twitter sentiment analysis benchmark, training a 2-layer classifier on top of frozen BERT embeddings. These results are plotted in the response PDF in Figures 1 and 2. We observe similar trends as in the experiment in the main paper; in particular, if $C$ (the scaling on the monetary penalties) is sufficiently large, players are strongly disincentivized from adding large noise.
We found our model in the Twitter benchmark to be more sensitive to gradient noise than the CNN in our FeMNIST experiments, with a noise level of $\alpha_A=5$ degrading the loss by more than twice as much ($0.084$ vs $0.034$) as noise level $\alpha_A=9$ did for the CNN. Correspondingly, the penalties needed to achieve honesty were roughly 10 times larger than on the FeMNIST dataset, which leads to a similar increase of the 98th percentile of payments at the honest equilibrium (from $0.0031$ to $0.0243$) for the largest considered penalty $C = 0.002$. We would like to note that this outlier payment is still only a third of the damage caused to the loss at $\alpha_A=5$, and that a 4 times smaller penalty of $C = 0.0005$ still appears to be sufficient to incentivize honesty.
Lastly, we would like to thank you for the additional feedback. We will incorporate it in the updated version of the paper.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thank you for detailed response.
**Truthful EQ** I find the authors' point regarding the additional coordination required for non-truthful EQs to be convincing. Adding this as a remark would boost the usefulness of the results regarding truthful EQs.
**Collusion** I agree that the mechanism is likely collusion proof when the level of collusion is small (or at least some similar mechanism). While collusion is not the main focus of this paper, providing a result regarding collusion would increase the paper's strength. However, even without such a result, the paper is a clear accept in my opinion.
**Additional Experiments** Thank you for running these. The contrast in required payment between different datasets is quite interesting. There is a point to be made that, in practice, finding the "correct" fine may be tricky, as small fines may lead to undesirable strategic behavior, while large fines may disincentivize agents from participating in the mechanism.
**Restricted model** Other reviewers raised issues of unjustified or restrictive assumptions made by the authors. While I agree with some of these criticisms, I do not believe the authors' assumptions to be too restrictive, and find their response to reviewer @hQqh to be satisfactory.
After reading the other reviews and the authors' rebuttal I stand by my original recommendation of Accept. This paper is very interesting and the model is highly relevant to the growing field of distributed learning. The authors' paper sets up a solid foundation through which future work can further analyze this problem (especially given its practicality).
---
Reply to Comment 1.1.1:
Title: Thank you for your response!
Comment: Thank you for your timely and detailed response! We appreciate your constructive and positive feedback regarding our paper.
We are also happy that you find our modelling of the strategy spaces appropriate.
We will incorporate your feedback into the next version of the manuscript. In particular, we will add discussions on the non-truthful EQs, on how to find the correct penalty scaling, and on collusion. We will also include and discuss the additional experiments. | Rebuttal 1:
Rebuttal: We thank the reviewers for their valuable and constructive feedback. We are glad that the reviewers find our setting novel and interesting ($\color{blue}hRji$), well motivated ($\color{lime}seCF$), our technical contributions novel ($\color{red}Mvyk$) and our text smooth to read ($\color{cyan}hQqh$). We now address the most common questions brought up by the reviewers, and look forward to the reviewers’ replies.
**The mechanisms induce honesty as a Nash equilibrium, but there are other equilibria ($\color{lime}seCF$, $\color{blue}hRji$)**
While this is true, we would like to put it in context.
In the peer prediction setting, dishonest equilibria are usually preferable to all players (but the server), but this is not true for most natural instantiations of our setting. In the mean estimation setting, by the proof of Theorem 5.1, $\mathbb{E}[b^i(x^i)]$ has to be nonzero for at least two players at non-honest equilibria, such that all players receive at least one manipulated update and a worse model than at the fully honest equilibrium. Correspondingly, the sum of all players' MSEs increases, such that at least some players would prefer the honest equilibrium as long as $\lambda_i>1$. This makes it unlikely for players to take on the difficult coordination task of playing a non-honest equilibrium. In the SGD setting, Theorem 6.1 guarantees that at any equilibrium the losses of the players are close to optimal.
We thank the reviewers for pointing this out. We will add a more detailed discussion of this in the final version.
**The considered strategy spaces are restricted ($\color{cyan}hQqh$, $\color{blue}hRji$)**
As noted in line 200, the attack strategies we consider for mean estimation are quite general: Equation (3) essentially parameterizes the most general attack space in a more interpretable way: Any random variable $m(x)$ that represents a message can be written as $\bar{x} + b(x)+\alpha(x)\xi$ where $b(x) =\mathbb{E}[m(x)|x]-\bar{x}$ and $\mathbb{E}[\xi|x]=0$.
Equation (4) effectively prevents $b^i$ from encoding knowledge about $\mu$. This restriction is needed to exclude "non-general" strategies whose success depends on the true value of the parameter $\mu$ that is to be estimated, such as always sending a constant value $m^i$, so that $b(x)=m^i-\bar{x}^i$.
Our strategy spaces are more restricted in the complex SGD setting, where, however, the attack space has several natural interpretations, such as adding a noise-based differential privacy defense or hiding samples from the empirical estimates. That said, we would like to point out that the use of restricted strategy spaces/classes of estimators is common in both game theory and statistics. In particular, the optimal solution to even a single player version of our Mean Estimation game for an isotropic gaussian distribution is unknown for fully general strategy spaces in $d\geq 3$ dimensions, due to Stein’s paradox.
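To make the generality claim above concrete, the decomposition can be written out explicitly (our notation; this restates the argument above rather than reproducing the paper's formal proof):

```latex
% Any message strategy m(x) decomposes as in Equation (3),
%   m(x) = \bar{x} + b(x) + \alpha(x)\,\xi,
% by defining
\[
  b(x) \;:=\; \mathbb{E}\!\left[\,m(x)\mid x\,\right] - \bar{x},
  \qquad
  \alpha(x)\,\xi \;:=\; m(x) - \mathbb{E}\!\left[\,m(x)\mid x\,\right],
\]
\[
  \text{so that}\quad
  m(x) \;=\; \bar{x} + b(x) + \alpha(x)\,\xi
  \quad\text{with}\quad
  \mathbb{E}\!\left[\,\xi\mid x\,\right] = 0
  \;\;\text{by construction.}
\]
```

The bias term $b(x)$ captures any systematic manipulation and $\alpha(x)\xi$ the conditionally zero-mean noise, so the parameterization loses no generality.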
**Do meaningful mixed Nash equilibria exist without our mechanisms ($\color{cyan}hQqh$)?**
Indeed, our result already includes mixed equilibria. The confusion might in part have been caused by an imprecision in the corollary statement: As our monotonicity-based argument for the nonexistence of equilibria only works as long as rewards are finite, the corollary should state: “... does not have any (mixed) Nash equilibrium for which $\mathbb{E}(\alpha^j(x^j)^2)$ and $\mathbb{E}(||b^j(x^j)||^2)$ are finite for all players.”
We apologize for the confusion and will update the paper to clarify this point by updating the corollary statement to the version presented above.
For more details, please consider the individual response to reviewer $\color{cyan}hQqh$.
**It is not clear from the manuscript whether our results are a straightforward application of existing peer prediction results or require novel technical methods ($\color{red}Mvyk$,$\color{cyan}hQqh$)**
We thank the reviewers for pointing this out and will aim to highlight the key technical challenges better in future versions of our manuscript. We respond to each reviewer individually and provide a summary below.
To our awareness, our work is the first to explicitly model competitive incentives between clients in collaborative learning. Our first key technical insight is that ideas from peer prediction, which instead focuses on conflicting interests between the server and clients, can be applied to this setting.
To the best of our knowledge, both the noise-based mechanism (Theorem 5.2) and our application of peer prediction to SGD (Theorem 6.1) are entirely novel. The analysis of the former required a nontrivial reduction to the payment-based case, while the analysis of the latter is based on a novel recursion for the squared norm of differences in SGD-iterates between a clean trajectory and a trajectory with time-varying gradient noise.
**Additional experiments ($\color{lime} seCF$,$\color{red} Mvyk$)**
We added further experiments, one using a different dataset and model; and one that compares our schemes to an existing method for robust FL. Please refer to the response PDF and the individual responses to reviewers $\color{lime} seCF$ and $\color{red} Mvyk$ for further details.
**The schemes require that the participants observe IID data ($\color{blue}hRji$)**
We disagree that we only cover IID data – Sections 3, 4, 5, and 7 explicitly analyze various non-IID settings. We will highlight this better in the next version of the manuscript. We refer to the individual response to Reviewer $\color{blue}hRji$ for more details.
Pdf: /pdf/67785c273ac4e3f8ed1f7d7570fa971b353052b2.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
LART: Neural Correspondence Learning with Latent Regularization Transformer for 3D Motion Transfer | Accept (poster) | Summary: The paper presents LART, a 3D Transformer framework for 3D motion transfer. One of the distinctions from previous methods is that LART does not require joint annotation or pre-defined correspondence between the source and target mesh. By preserving motion metrics and effectively controlling synthetic motions in the latent space construction, LART achieves accurate motion synthesis. The experimental results demonstrate the high learning efficiency of LART, requiring only a few samples from the AMASS dataset to generate motions with plausible visual effects.
Strengths: 1. The paper is well-written and presents the information in a clear and comprehensible manner.
2. A novel latent geometric regularization is proposed for synthesizing realistic dynamic results.
3. The proposed method demonstrates its versatility by successfully extending its applicability beyond human motion transfer to animal motion transfer, as illustrated in Fig 5.
4. The method achieves good performance in both quantitative and qualitative evaluations, providing solid evidence of its effectiveness.
Weaknesses: 1. The author should provide more explanation and necessary information about specific terms and blocks used in the paper. For instance, a detailed description of the SPAdaIN block is needed, including its purpose and relevance in the decoder. Additionally, the author should further highlight that SPAdaIN is from [33], not just cite it without mention.
2. The design of the cross-attention mechanism depicted in Fig 2 is not clearly described. More detailed explanations regarding its implementation are required.
3. The proposed encoder is named "geometry adaptive 3D feature encoder," but it seems that geometry is only involved in the positional embedding. It would be beneficial to consider incorporating a geometry-aware design in the encoder, in addition to the positional embedding.
4. The paper lacks an ablation study on the three different positional embedding methods presented in Fig 3. It is important to analyze the performance of each positional embedding method, as the supplementary Table 7 only reports the loss without providing any analysis.
5. While the paper includes visualizations in figures, it would be advantageous to provide a demo video to showcase the actual quality of the visualizations.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See weakness
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: See weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your acknowledgment of the novelty of our work and the constructive feedback! We will address your questions and concerns in the following:
**Q1: The author should provide more explanation and necessary information about specific terms and blocks used in the paper. For instance, a detailed description of the SPAdaIN block is needed, including its purpose and relevance in the decoder. Additionally, the author should further highlight that SPAdaIN is from [33], not just cite it without mention.**
A1: We appreciate your valuable suggestion, which can help us better illustrate our work for future researchers. SPAdaIN was first proposed by Wang et al. in [33]; it injects the original target mesh into each layer of the network with a residual-like connection. In this way, the detailed spatial geometry can be perceived throughout the whole flow of the network. Thus, the network can be geometry-aware and preserve the geometric information of the target meshes (i.e., the shape). Due to the page limit, and as SPAdaIN is not the contribution we want to emphasize, the details of the network design (e.g., the choice of SPAdaIN and the attention structure), the specific dimensions of each layer, and the hyperparameters of the network are provided in the Appendix. As suggested, we will add an explanation of each introduced component in the new version of the paper.
**Q2: The design of the cross-attention mechanism depicted in Fig 2 is not clearly described. More detailed explanations regarding its implementation are required.**
A2: We appreciate your suggestion; the cross-attention block in Fig. 2 is originally specified in the paper by Eq. (1), where q and k are the latent pose codes from the target mesh and the driving pose. We will enhance the illustration of Fig. 2 and indicate that the cross-attention block (the green one) is based on Eq. (1), with clearer annotations in the new version.
**Q3: The proposed encoder is named "geometry adaptive 3D feature encoder," but it seems that geometry is only involved in the positional embedding. It would be beneficial to consider incorporating a geometry-aware design in the encoder in addition to the positional embedding.**
A3: Thanks for your suggestion. As discussed in Q1, the geometry-aware function is achieved via SPAdaIN in the decoder part of LART, which takes the raw target mesh as residual input to learn the spatial geometry; this is more efficient for the generation. Meanwhile, our encoder focuses more on pose representations and on learning the pose correspondence between the target mesh and the source pose. If we also introduced a spatially and geometrically aware design in the encoder, the network might become redundant and intricate, as overly detailed geometry (such as wrinkles and tissue) is less necessary for pose encoding.
**Q4: The paper lacks an ablation study on the three different positional embedding methods presented in Fig 3. It is important to analyze the performance of each positional embedding method, as the supplementary Table 7 only reports the loss without providing any analysis.**
A4: We appreciate that the reviewer noticed the ablation study of the embedding methods. Firstly, compared to concatenated embedding, additive embedding takes less memory and has been shown to perform better (see ViT [Dosovitskiy et al., 2020] and PoseFormer [Zheng et al., 2021]); thus, in this work, we use additive embedding by default. As the reviewer noticed, Table 7 is the ablation study of the different embedding methods. However, we find it tricky to conduct a fair comparison with a good protocol. Precisely, our adaptive embedding is designed specifically for target meshes whose vertex counts differ from those of the motion sequences, which the fixed-scheme embedding cannot handle; thus, we cannot directly compare the two schemes on different target sizes. As a result, we only report the evaluated reconstruction loss (i.e., the PMD used for evaluation) in training, in which the target meshes all have a fixed vertex count. This loss is equivalent to the evaluation under the unseen-motion setting, which we use as the ablation study of the different embedding schemes. Note that this comparison is not entirely fair, as it only evaluates the model on target meshes with a fixed vertex count, which cannot fully demonstrate the strength of the adaptive embedding. We will clarify this in the paper.
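To illustrate the general trade-off between the two embedding schemes discussed above (a generic NumPy sketch under our own naming, not the actual LART implementation), additive embedding keeps the feature width fixed and works for any vertex count, whereas concatenation grows the width:

```python
import numpy as np

def add_embed(features, coords, proj):
    """Additive positional embedding: project 3D vertex coordinates to
    the feature dimension and add them. Output width stays d, and the
    same projection matrix works for any number of vertices."""
    return features + coords @ proj                     # (n, d)

def concat_embed(features, coords):
    """Concatenated positional embedding: output width grows to d + 3,
    so downstream layers must be sized for the larger dimension."""
    return np.concatenate([features, coords], axis=1)   # (n, d + 3)

rng = np.random.default_rng(1)
d = 8
proj = rng.standard_normal((3, d))
for n in (100, 250):        # target meshes with different vertex counts
    feats = rng.standard_normal((n, d))
    coords = rng.standard_normal((n, 3))
    assert add_embed(feats, coords, proj).shape == (n, d)
    assert concat_embed(feats, coords).shape == (n, d + 3)
```

This only illustrates the add-versus-concatenate trade-off in memory and layer sizing; handling target meshes whose vertex counts differ from the training motions is what the adaptive scheme in the paper adds on top.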
**Q5: While the paper includes visualizations in figures, it would be advantageous to provide a demo video to showcase the actual quality of the visualizations.**
A5: Please see our response to the first concern of Reviewer 1. In short, we include additional qualitative videos showcasing more generation results of our LART in the folder 'more results', and provide experimental results with 4D raw scans as input to drive the target meshes in the folder '4D raw scan input', as well as with a noisy driving source in the folder 'noisy driving motion'. Furthermore, we demonstrate that our method can perform linear operations in the latent space and achieve meaningful manipulation of the motions in the folder 'interpolation'.
The result videos can be found at this anonymous link: https://we.tl/t-WDuWeIFY0K
Please let us know if you have more questions or concerns; thanks!
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I hope the author can revise the paper based on these suggestions in the updated version.
---
Reply to Comment 1.1.1:
Comment: We will revise the current version and enhance the manuscript based on your suggestion. We wish to express our appreciation for your constructive comments and corrections, which have greatly improved the manuscript. | Summary: The paper presents a method to transfer motion from a dynamic input sequence to a static 3D object. There are several novel components presented in the method: a novel feature encoder with an adaptive positional encoding scheme and a novel latent geometric regularization on the transformer. The paper is evaluated using motions from the AMASS dataset and shapes from the DFAUST dataset.
Strengths: The paper is the first to address the problem of retargeting motion from a motion sequence to a novel shape. In this regard, it is quite a novel paper. There are some novel components in the method as well as listed above. I like that the method can also work when the driving motion meshes are noisy.
Weaknesses: The memory requirements of the method is a weakness; this is listed in the main paper itself.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: none
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have mentioned several limitations of their method in the main paper. I have mentioned a few of them above as well. But despite the limitations and weaknesses, I think there is merit in the paper and it deserves acceptance. So I recommend a borderline accept rating.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your acknowledgment of the novelty of our work and the constructive feedback!
**Q1: The memory requirements of the method are a weakness; this is listed in the main paper itself.**
A1: Regarding the concern about memory requirements: this memory allocation is predominantly attributable to the need to incorporate multiple frames as input. The rationale behind this allocation is to empower the model to capture and comprehend temporal patterns effectively with Latent Metric Regularization. This, in turn, enables the model to capture and synthesize reasonable temporal dynamics (locally linear in Euclidean space, yielding better deformation effects). As discussed in Section 3.1 of the paper, few efforts have been made toward end-to-end 3D motion transfer due to the substantial computational cost. To our knowledge, ours is the first attempt to achieve end-to-end 3D motion transfer with a customized Transformer architecture.
**Q2: The authors have mentioned several limitations of their method in the main paper. I have mentioned a few of them above as well. But despite the limitations and weaknesses, I think there is merit in the paper, and it deserves acceptance. So I recommend a borderline accept rating.**
A2:
We sincerely appreciate your acknowledgment of the merit in our work and your recommendation for a borderline accept rating, and we are pleased that you recognize the significance of our work despite the identified limitations.
Please let us know if you have more questions or concerns; thanks!
---
Rebuttal Comment 1.1:
Comment: Thanks for taking the time to reply to my review. I maintain that this paper has merit and I will retain my initial rating for the paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for replying to our response and we wish to express our appreciation for your positive comments! | Summary: This paper describes a method to transfer the dynamic mesh sequences to the unseen 3D mesh target. A transformer-based model is developed to implicitly learn the correspondence. In this model, pose and identity embeddings are separately encoded from the meshes. A decoder is designed to generate mesh sequences while considering the target mesh and encoded embeddings. The proposed method, LART, has been mainly evaluated on DFAUST dataset.
Strengths: 1. Transferring 3D mesh motion to an unseen 3D target is very challenging because it requires learning the correspondence between the input and the target mesh. In this paper, a transformer-based model is trained to implicitly learn such correspondence. The idea is interesting.
2. The paper is well written to clearly state the differences compared with previous methods and the main contributions.
Weaknesses: The main concern is about the experiments.
The task is claimed as transferring 3D motions between general subjects, not only human meshes, while the evaluation, especially the quantitative evaluation, only covers humans. There are only a few qualitative results shown in Fig. 5 to demonstrate the generalization ability of the proposed method, which may not be convincing enough. As a temporal method, it would be better if a video could be provided to show its visual performance. Therefore, the contributions may be a little over-claimed; the current experiments may not be enough to support them.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I think the most attractive point of the proposed method is its generalization ability.
Is there any other way to quantitatively / qualitatively demonstrate the generalization ability of the proposed method?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your acknowledgment of the challenge of our work and the constructive feedback! We will address your questions and concerns in the following:
**Q: The main concern is about the experiments. I think the most attractive point of the proposed method is its generalization ability. Is there any other way to quantitatively / qualitatively demonstrate the generalization ability of the proposed method?**
A: Please see our response to the first concern of reviewer 1. In short, we include additional qualitative videos to showcase more generation results of our LART in the folder ‘more results’, including various motions, and provide experimental results with 4D raw scans as input to drive the target meshes in the folder ‘4D raw scan input’, as well as a noisy driving source in the folder ‘noisy driving motion’. Furthermore, we demonstrate that our method can apply linear operations in the latent space and achieve meaningful manipulation of the motions in the folder ‘blending and interpolation’. All of these abilities demonstrate the generalization ability of our LART.
The result videos can be found at this anonymous link: https://we.tl/t-WDuWeIFY0K
Please let us know if you have more questions or concerns, thanks!
---
Rebuttal Comment 1.1:
Title: Further discussion
Comment: Dear Reviewer gKzm,
Thank you so much again for your time and effort in assessing our paper. We hope our rebuttal has addressed your concerns. We are happy to discuss further if you still have other concerns before the rebuttal deadline.
Thanks for helping improve our paper. | Summary: The paper proposes to improve the SOTA of the learned pose/motion transfer on unrigged 3D meshes. The architecture consists of a geometry adaptive feature encoder, a LART decoder, and a latent metric regularizer.
The geometry adaptive feature encoder first extracts features similar to NPT [33] by casting each vertex to a higher dimensional feature, e.g. 1,024 dimensions, through a series of 1D convolutions followed by an instance norm. A typical approach then is to either concatenate or add the feature to the embedding, which limits the model to only work on a fixed number of vertices. Instead, the paper proposed to apply max pooling across all vertices of the extracted feature, add to the embedding, then tile this to an arbitrary number of vertices.
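To make the pool-then-tile trick concrete, here is a minimal stdlib-only Python sketch (our own illustration, not code from the paper; the random linear map stands in for the 1D-conv feature extractor, and all function names are hypothetical). The key point it demonstrates is that max pooling across vertices yields a fixed-size global feature regardless of vertex count, which can then be tiled back to any number of vertices:

```python
import random

def per_vertex_features(vertices, dim=8, seed=0):
    # Stand-in for the 1D-conv feature extractor: a fixed random linear
    # map from (x, y, z) to a `dim`-dimensional feature per vertex.
    rng = random.Random(seed)
    w = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(dim)]
    return [[sum(wi * xi for wi, xi in zip(row, v)) for row in w]
            for v in vertices]

def pooled_global_feature(vertices, dim=8):
    feats = per_vertex_features(vertices, dim)
    # Max pool over ALL vertices: the result has fixed size `dim`
    # no matter how many vertices the mesh has.
    return [max(f[d] for f in feats) for d in range(dim)]

def tile(feature, n_vertices):
    # Broadcast (tile) the pooled feature to an arbitrary vertex count,
    # avoiding the fixed-vertex-count limitation of concatenation.
    return [list(feature) for _ in range(n_vertices)]
```

A mesh with 2 vertices and one with 50 vertices both pool to the same feature size, so the downstream decoder never sees a shape that depends on vertex count.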
The resulting feature is passed to the LART decoder which is essentially a transformer to attend to the corresponding vertex in the driving mesh and the target mesh. This is unlike 3D-CoreNet [29] where the correspondence is learned explicitly with the optimal transport.
Finally, the model applies the latent metric regularization to encourage the poses in motion to be interpolatable.
Strengths: All proposed components are novel and reasonable to achieve the goal. In particular, the use of the transformer to implicitly learn the 3D geometry correspondence is a great idea.
Weaknesses: No videos are provided, making it hard to qualitatively discuss the method.
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors: Can authors provide videos of various results, as the paper's title is "motion transfer?" How well does the method handle temporal coherency? Can authors share the video of the ablation on the latent metric regularization?
Have authors tried the method on a driving sequence with varying vertex count, e.g. raw 4D scan output? In theory, the method seems to be able to handle this.
Can authors demonstrate the effect of the flattening by applying algebraic operation? Would it make sense to show the results of motion blending and motion in-betweening?
Is there a way to evaluate correspondence learning compared to others like DiffusionNet [Attaiki et al. 2022]?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 4 excellent
Limitations: As the authors infer, the method will not handle geometric and physical constraints like volume preservation and collisions. I understand that such effects are not in the scope of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your acknowledgment of the novelty of our work and the constructive feedback! We address your questions and concerns in the following.
**Q1: Can authors provide videos of various results, as the paper's title is "motion transfer?" How well does the method handle temporal coherency? Can authors share the video of the ablation on the latent metric regularization?**
A1: The result videos can be found at this anonymous link:
https://we.tl/t-WDuWeIFY0K
In the folder ‘SOTA comparison’, we visualize the results of both our model and other SOTA methods (3DCoreNet, NPT, LART w/o LMR, LART, and Ground Truth). We summarize the advantages of our LART over the others in the following:
(1) Comparison with SOTA methods: as the qualitative results in the video show, our method outperforms existing SOTA methods in visual quality, and the quantitative evaluation in the paper confirms this.
(2) Temporal coherency with LMR ablation study: as one can see, although LART (both with and without LMR) generally outperforms the other SOTA methods, LART without LMR still shows observable artifacts such as shrinking arms and body, while LMR substantially improves temporal coherence.
**Q2: Have authors tried the method on a driving sequence with varying vertex count, e.g., raw 4D scan output? In theory, the method seems to be able to handle this.**
A2: Thanks for your suggestion. We followed your suggestion and added additional experiments as requested:
Setting: We use the raw 4D scans from the official DFAUST release, with subject 50027 and the motion ‘shaking arms’ as input.
We randomly sampled 6,890 points from the raw scans (since the raw scans contain too many vertices, more than 15,000 points) and conducted motion transfer with a pre-trained LART to verify its performance both with and without further finetuning. The result videos can be found at the above anonymous link, in the folder ‘raw scan input’.
As we can see, using a pre-trained model without any finetuning to transfer motion from raw 4D scans is extremely challenging (existing methods all need domain-specific finetuning to take raw data as inputs, such as MetaAvatar [S Wang 2021, NeurIPS]), because the domain distributions of the watertight training meshes and the raw test scans are quite different. For instance, the distortion on the hands in the video is caused by too-sparse point sampling on the hands under naive random sampling, whereas the points on watertight meshes are evenly distributed. Thus, we further finetuned the model on the raw 4D scans for ten epochs, after which the visual results become much better as the domain gap narrows. To conclude, the demo shows the potential of our LART to work directly on 4D raw scans with domain-specific finetuning, which could be a future direction.
**Q3: Can authors demonstrate the effect of the flattening by applying algebraic operation? Would it make sense to show the results of motion blending and motion in-betweening?**
A3: Thanks for your valuable suggestion. Exploring the potential of applying algebraic operations directly on the latent space would be interesting. We demonstrate two examples by 1) taking two frames from the same motion (boxing), i.e., interpolation, and 2) taking two frames from different motions (one_leg_loose and chicken_wings), i.e., blending, as the first and last frames to generate the corresponding mid-frame outputs by varying the alpha in Eq. (3). We attach the results in the ‘blending and interpolation’ folder; as we can see, our LART has the potential to generate reasonable ‘in-between’ outcomes: the intermediate pose combines both the leg-lifting and arm-raising actions.
**Q4: Is there a way to evaluate correspondence learning compared to others like DiffusionNet [Attaiki et al. 2022]?**
A4: We really appreciate your valuable suggestion to find a way to evaluate correspondence learning and the mention of DiffusionNet [Attaiki et al. 2022]. DiffusionNet focuses on the 3D shape-matching problem. Although that is similar to 3D motion transfer to some extent, it belongs to a different line of research and is beyond our scope. Although aligning the correspondence can be beneficial to motion/pose transfer, it is not the ultimate goal of our 3D motion transfer task; one can still achieve robust motion/pose transfer without explicit correspondence alignment. Thus, we think quantitatively evaluating correspondence learning with rigorous metrics might stray too far from the original task and might mislead the research direction. However, we agree with the reviewer that it would be meaningful to quantitatively evaluate and verify the correspondence learning of LART in different tasks, such as shape matching and deformation, as it shows promising qualitative results, but we leave this as future work.
**Q5: As the authors infer, the method will not handle geometric and physical constraints like volume preservation and collisions. I understand that such effects are not in the scope of this work.**
A5:
We agree with your comment that handling geometric and physical constraints can effectively improve the visual results. How to effectively introduce such physical priors into the learning process is a promising direction for 3D motion transfer.
Please let us know if you have more questions or concerns; thanks!
---
Rebuttal Comment 1.1:
Comment: As the reviewer gKzm says, I wish authors provided video results on animals and hands.
I still wish the paper compared against other correspondence learning techniques, since the paper is titled "Neural Correspondence Learning." I disagree with the authors' comment that "one can still achieve robust motion/pose transfer without the need for correspondence alignment." The pose transfer problem boils down to the correspondence problem. I suggest the authors change the paper title if there will be no evaluations in terms of correspondence.
I also suggest authors cite Neural Jacobian Fields [Aigerman et al. 2022] and consider comparisons and discussions.
For now, I will keep my score but I feel the paper is weaker than my initial review.
---
Reply to Comment 1.1.1:
Comment: Many thanks for your reply!
Since the rebuttal deadline is approaching as we receive your latest comment, we cannot provide the animal and hand video results in time. But we will prepare a video link demonstrating video results on animals and hands within one or two days.
Regarding correspondence learning, we will conduct a preliminary quantitative evaluation by comparing the reconstruction error of our work with other methods to showcase its correspondence learning ability. The experimental results will be provided within one or two days.
We appreciate the reviewer mentioning Neural Jacobian Fields, a work we were also interested in. Instead of using generative networks like ours, Neural Jacobian Fields applies mesh-specific, basic linear-algebra operations to an intrinsic field of matrices defined over the tangent spaces to preserve detailed geometry. It is very computationally efficient and effective. However, it relies heavily on discrete differential geometry operators, which require rigorously processed watertight meshes, whereas our LART uses a simple LMR to constrain the deformation, making it more capable of handling complex inputs such as 4D raw scans. We will add a discussion of Neural Jacobian Fields in the revised version. | Rebuttal 1:
Rebuttal: We would like to thank all the reviewers and the AC for dedicating their time and expertise to assess our manuscript thoroughly. We are glad to see the positive remarks from the reviewers on various aspects of our work (the novelty of LART, the versatility of the proposed method, the robustness to noisy input, the generalization ability, and the paper being well written).
We have taken each reviewer's suggestions and comments into consideration and have made comprehensive replies in the rebuttal, including further explanations of specific technical details, clarifications of ambiguous points, as well as the various visualized experimental results requested by the reviewers.
We have provided individualized responses to each reviewer's comments in the box below. The extra experimental results are attached in the anonymous link:
https://we.tl/t-WDuWeIFY0K | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
MIMEx: Intrinsic Rewards from Masked Input Modeling | Accept (poster) | Summary: The paper introduces a novel method for Exploration in RL, called MIMEx (Masked Input Modeling for Exploration).
Previous approaches for exploration investigated intrinsic rewards, usually computed as a measure of a state or transition’s “novelty”, adding them to extrinsic rewards (the actual task’s rewards) to enhance the algorithm’s exploration and thus performance, especially for tasks with sparse rewards.
The present paper argues that those previous intrinsic reward methods can be viewed under a unifying lens as approaches that use pseudo-likelihood estimation to estimate novelty. A general algorithm is introduced called MIMEx that computes pseudo-likelihood over entire trajectories, as the prediction loss of a masked sequence autoencoder. This loss corresponds to the intrinsic reward for an entire trajectory, and can be simply added to task rewards when running any RL algorithm.
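The idea summarized above can be sketched in a few lines of stdlib-only Python (our own illustration of the general recipe, not code from the paper; `mean_predictor` is a deliberately trivial stand-in for the masked sequence autoencoder): mask part of a trajectory, score the predictor's reconstruction error on the masked positions, average over several masks, and use that loss as the intrinsic reward.

```python
import random

def mean_predictor(visible, i):
    # Toy predictor: guess the mean of the visible (unmasked) values.
    seen = [x for x in visible if x is not None]
    return sum(seen) / len(seen) if seen else 0.0

def masked_reconstruction_loss(traj, predictor, mask_ratio=0.7):
    # Mask a random subset of positions and score the predictor's
    # mean squared error on exactly those positions.
    n = len(traj)
    k = max(1, int(mask_ratio * n))
    masked = set(random.sample(range(n), k))
    visible = [None if i in masked else x for i, x in enumerate(traj)]
    return sum((predictor(visible, i) - traj[i]) ** 2 for i in masked) / k

def intrinsic_reward(traj, predictor, n_masks=4):
    # Average over several independent masks to reduce variance;
    # a higher loss marks the trajectory as more "novel".
    return sum(masked_reconstruction_loss(traj, predictor)
               for _ in range(n_masks)) / n_masks
```

A well-memorized (constant) trajectory yields zero intrinsic reward, while a trajectory the predictor cannot reconstruct yields a positive bonus that would be added to the task reward, scaled by some coefficient.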
The paper then demonstrates the effectiveness of the approach with evaluations, baseline comparisons and ablation studies on a “PixMC-Sparse” benchmark suite.
Strengths: The paper is very clearly and naturally presented, with great care put into arguing for a coherent story. The work is original, arguing for a straightforward but interesting generalization of existing concepts in the RL literature.
The natural blending of theoretical justification for the work (pseudo-likelihood as novelty), application of recently popular techniques (masked sequence modeling) and experimental results make the paper highly significant. The generalization of the concept of intrinsic rewards to entire trajectories is also extremely interesting.
The experimental results are thorough, including evaluation on different benchmark settings, comparisons with other approaches, and extensive ablation studies. In particular, the ablation study showing how scaling of the autoencoder transformer affects results seems very interesting, and should warrant further investigation.
Weaknesses: It seems that adding a whole masked autoencoder to compute intrinsic rewards may incur a hefty computational cost. What is the overall runtime / resource utilization of MIMEx compared to other baselines? Even if MIMEx is more computationally demanding, it can still be a good choice due to sample efficiency, especially for offline RL tasks. In any case, the paper would greatly benefit from such an analysis.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: A few points:
* **[a]** Can the authors address the point raised in “Weaknesses”, related to computational cost?
* **[b]** In Section 5.3, “Trajectory-level exploration”, specifically in Figure 4: Why does a sequence length of 6 reduce performance? From the authors’ presentation, it may seem that increasing trajectory length should in principle always guarantee a benefit. Why is there instead an optimum at lengths shorter than 6?
* **[c]** Minor point: For all plot figures, font sizes are too small.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: I believe the authors appropriately addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the very positive feedback. We appreciate your evaluation of our work and address the questions below.
**”What is the overall runtime / resource utilization of MIMEx compared to other baselines?”** (also **Questions [a]**)
Thank you for the suggestion. We have included tables for both the wall-clock time and GPU memory utilization of MIMEx compared to other baselines in the global PDF, and will add this to our revised paper. While MIMEx could be more computationally demanding, the overhead in both wall-clock time and GPU memory usage is comparable to those of the baselines, so we also agree that its benefit in sample efficiency could outweigh the computational challenge.
**Questions**
**[b]** This is an interesting observation that we noticed too. We hypothesize that there is a “sweet spot” that achieves the optimal balance of a tradeoff, rather than longer sequence lengths always being better: while longer exploration sequences may encourage more complex exploratory behaviors and tackle harder exploration problems, they may also increase the variance of the prediction task and worsen performance. Our results in Figure 4, where MIMEx with exploration sequence length 6 underperforms MIMEx with shorter exploration sequence lengths, empirically confirm this hypothesis.
**[c]** Thank you for the suggestion. We will increase the font sizes of all figures in our revised manuscript.
---
Rebuttal Comment 1.1:
Title: Reply to Authors
Comment: Thank you for your rebuttal. The wall clock time and memory usages are indeed impressive.
I don't have any additional comments, and maintain my score as previously stated.
---
Reply to Comment 1.1.1:
Comment: Thank you so much again for the comments. | Summary: This paper proposes to use masked autoencoding (similar to MAE) in RL and use the loss as an intrinsic reward for exploration in sparse reward domains. Their method, MIMEx, does masked reconstruction of latent observation o_t, based on the previous T observations, and assigns the loss as intrinsic reward r_t. They propose that count-based or prediction error-based intrinsic rewards can be viewed in the same masked autoencoding framework, as specific inputs and masks (such as ICM and RND). They show experiments in DM Control Suite and a harder version of PixMC denoted as PixMC-Sparse, which sparsifies the shaped reward. They show that MIMEx outperforms ICM and RND baselines.
Strengths: Experiment results suggest it is stronger than ICM and RND, and the method itself is relatively straightforward. The paper does clearly demonstrate evidence that this is a viable method for exploration.
Weaknesses: The formulation of masked autoencoding and trying to express other methods like RND and ICM in terms of it is not a very strong argument. One of the key parts of RND is the usage of a fixed random network, which is not properly captured by a varying mask; it is captured by allowing an arbitrary transformation of the inputs. For ICM, the representation learning part from inverse control is also not captured by the formulation, but also delegated to a transformation of the inputs. Representation learning is often a key part of an exploration algorithm, which MIMEx does not directly capture. By allowing an arbitrary transformation of the inputs, the actual masking portion of MIMEx becomes less important, since masking can equally be delegated as a different transformation of the inputs, resulting in only needing a trivial mask for MIMEx. I think there should be less emphasis on this part for the paper, as the experiments also do not investigate this idea further. An example of further investigation would be to try to emulate RND/ICM or other exploration approaches through specific masks (not just random masks) and showing whether specific masks may be better or worse than uniform random masking.
I believe your reference [14] Byol-explore, does incorporate sequence-level information in its prediction error for intrinsic reward, which is not mentioned in related work.
---- After Author Rebuttals ----
After reading other reviews and the author rebuttals, I think the authors have addressed the addressable parts of my concerns, as well as have stated that they will clarify their main claim to be more nuanced. Thus I'm inclined to slightly raise my score.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: MIMEx seems to be encoding and decoding latents (as opposed to pixels), so are you allowing gradients from MIMEx to also flow back through to those latents? I.e. are you stopping gradients from the input/target sides? If not stopping gradients, then is there a danger that the representation will collapse to a constant? If yes, then are you relying on the RL to train the latent representations?
In your Figure 1, MIMEx only receives latent observations as input/target. Why are actions not included? Many forward model predictions rely on state and action information for prediction, including ICM. What was the reasoning behind this? If actions were included, then MIMEx would be able to more closely emulate many multi-step prediction methods such as [14] BYOL-Explore or SPR (Data-Efficient Reinforcement Learning with Self-Predictive Representations https://arxiv.org/abs/2007.05929) by picking a mask that only masks out observations but keeps the actions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: This paper addresses some of the limitations, but another potential limitation for this method is partial observability, especially for long horizon problems. Only using the previous T observations as input might not be a rich enough context for predicting the next observation, so there would need to be another way of adding more past information.
Another potential limitation is stochasticity. Since MIMEx is predicting observations, it will suffer from high loss if the observations are stochastic, similar to many other prediction-error-based approaches that try to directly predict the observation. RND avoids the issue by same-step prediction, while ICM relies on the representation learning of inverse control to filter out dynamics-irrelevant noise. With MIMEx, it seems like even adding white noise to observations could end up with high losses everywhere.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the feedback.
**Strengthening the masked autoencoding argument**
Thank you for the suggestion. We agree that our method only generalizes the conceptual formulation of RND or ICM as summarized in Table 1, and does not capture differences in representation learning. We will clarify this in the revised paper. To further highlight the importance of having a flexible mask distribution, we ran additional experiments where we emulate RND/ICM through specific masks.
Specifically, we interpolated between the MIMEx masking distribution and the masking distribution closest to ICM/RND. Experiments with the following exploration sequence lengths and mask distributions are added:
- l5_uniform_70 - seq length 5, uniformly random mask, 70% [MIMEx]
- l5_fixed_70 - seq length 5, fixed mask, 70%
- l5_fixed_50 - seq length 5, fixed mask, 50%
- l2_fixed_50 - seq length 2, fixed mask, 50%
(For “fixed mask”, we always mask out the last X positions of the input sequence based on the mask ratio, effectively making the reconstruction problem a future-state prediction problem.)
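The two mask families in the ablation above differ only in how positions are chosen; a short stdlib-only Python sketch (our own illustration of the setup described in this rebuttal, not the authors' code) makes the contrast explicit:

```python
import random

def uniform_mask(seq_len, ratio):
    # MIMEx-style mask: a uniformly random subset of positions.
    k = max(1, int(ratio * seq_len))
    return sorted(random.sample(range(seq_len), k))

def fixed_mask(seq_len, ratio):
    # Ablation variant: always mask the LAST positions, turning
    # reconstruction into future-state prediction (the setting
    # closest to one-step methods like RND/ICM).
    k = max(1, int(ratio * seq_len))
    return list(range(seq_len - k, seq_len))
```

For example, `fixed_mask(2, 0.5)` masks only the final position of a length-2 sequence (pure next-state prediction), while `uniform_mask(5, 0.7)` can hide any three of five positions, which is the extra flexibility the ablation credits for MIMEx's better performance.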
In the global PDF, we present results on the *KukaReach* environment, each curve with 3 seeds. We observe that our method performs better than the masking strategy corresponding to RND/ICM, indicating that the flexible mask distribution enabled by our method is beneficial:
1. Performance drops when going from uniformly random mask to fixed mask when keeping exploration sequence length the same.
2. Performance drops further when reducing the length of exploration sequence from 5 to 2. This is consistent with our ablation results presented in Fig 4.
3. The MIMEx variant with mask distribution closest to existing one-step exploration approaches, “l2_fixed_50”, performs the worst out of all the settings evaluated, which confirms our hypothesis that the additional flexibility in tuning masking distribution provided by MIMEx framework can positively contribute to its performance.
We believe that adding representation learning techniques such as inverse models from ICM is possible in our framework and is an interesting direction for future work. We will include the additional results and update the writing in our revised paper.
**Reference to BYOL-Explore**
BYOL-Explore indeed incorporates sequence-level information in its prediction error, though being very different from our work in both how the information is processed (by using a recurrent neural network) and used (to build the agent’s internal representation used for future prediction). We will include BYOL-Explore in the related work section of our revised paper and clarify these differences.
**Questions**
Regarding how pixel observations are encoded into latents, we follow the implementation of each respective baseline (MVP [41] for PixMC-Sparse and DrQ-v2 [42] for DMC). Specifically, for PixMC-Sparse, the pixels are encoded into latents via a frozen pretrained ViT model; for DMC, the pixels are encoded into latents via a convolutional network trained online with random-shift data augmentation. In the former case, the representation does not collapse since a frozen encoder is used; in the latter case, RL is used to train the latent representations and the random-shift augmentation prevents collapse. Empirically, each representation learning method has been state-of-the-art in its corresponding environment.
Regarding including action in the prediction objective, we chose to mask only the latent observations because doing so would make the model more general and easier to tune (as the data modality being masked is kept consistent). We also reasoned that since we mask a history of observations, it is possible that adding action information results in more redundancy as some action information can be implicitly inferred. However, we think adding action is an interesting idea and will look into this design choice in our follow-up work.
**Limitations**
*partial observability*
Thank you for your suggestion. We agree that partial observability could be an important challenge to RL exploration methods, and that providing more information to the agent will help alleviate the challenge. Transformer-based models have been used to handle longer context length than recurrent models and to address partial observability (e.g. Chen’21, Reed’22, Brohan’22); one strength of MIMEx is therefore that it could be flexibly extended to include more data modalities or handle longer context length. Recent works have reported positive results where masked autoencoding on more general trajectory data is used for representation learning (e.g. Radosavovic’23); this is a promising sign that MIMEx could benefit from richer data too and we are excited to pursue this direction in our follow-up work.
Brohan, Anthony, et al. "Rt-1: Robotics transformer for real-world control at scale." arXiv preprint arXiv:2212.06817 (2022).
Chen, Lili, et al. "Decision transformer: Reinforcement learning via sequence modeling." Advances in neural information processing systems 34 (2021): 15084-15097.
Radosavovic, Ilija, et al. "Robot Learning with Sensorimotor Pre-training." arXiv preprint arXiv:2306.10007 (2023).
Reed, Scott, et al. "A generalist agent." arXiv preprint arXiv:2205.06175 (2022).
*stochasticity*
Indeed, for now we are tackling this problem by averaging over multiple masks, and for the PixMC-Sparse implementation we delegate this representation learning problem, to some extent, to the frozen pretrained ViT encoder. We do not focus on representation learning in the scope of this work, but will update our writing in Section 7 to include a discussion of this important problem in our revised paper.
---
Rebuttal Comment 1.1:
Title: Masking Distributions and Actions
Comment: Thanks for the response to my questions! I appreciate the response and additional ablation on the masking distribution, which does give more insight into its importance.
However I do want to emphasize my point that without actions, MIMEx is unable to properly capture many intrinsic reward methods. The main reason I am re-emphasizing this point is because this paper is making a very strong claim for the expressiveness of the MIMEx framework, which is only partially true. For example in Section 3, line 99: "Inspired by this perspective, we propose Masked Input Modeling for Exploration (MIMEx), a unifying framework for intrinsic reward methods...". The term "unifying" is used multiple times throughout the paper and abstract. MIMEx generalizes 1-step to multi-step with very flexible mask distributions, which is its strength. However without actions, MIMEx fails to model any 1-step dynamics model type of intrinsic reward, including ICM. In your ablation of [l2_fixed_50 - seq length 2, fixed mask, 50%], if you are trying to do 1-step next state prediction, having the action vs. not having the action are fundamentally different problems, and MIMEx can only model one version. Thus I think this claim of MIMEx being a "unifying" framework is too strong, and it would be better to make a more nuanced claim such as "generalized framework for state-based multi-step intrinsic reward".
---
Reply to Comment 1.1.1:
Comment: Thank you so much for elaborating on your point. We now understand your concern about the claim of MIMEx’s expressiveness and agree. Indeed, while MIMEx can be naturally extended to include actions (by masking a history of not only observations but also actions), we did not investigate this idea explicitly within the scope of this submission. We will tone down the writing in our next revision to make the claim more nuanced, in particular regarding the word choice of “unifying” (e.g. replacing it with words like “generalized”). | Summary: The paper introduces a novel approach to exploration in reinforcement learning (RL) called Masked Input Modeling for Exploration (MIMEx). MIMEx uses a masked autoencoding objective on variable-length input sequences to derive intrinsic rewards for exploration. The paper claims that MIMEx improves exploration efficiency in sparse-reward tasks and shows that MIMEx outperforms claimed competitive baselines on tasks from the PixMC-Sparse suite and the DeepMind Control Suite, except in a few cases, such as certain cases in which the reward is not sparse.
Strengths: **Originality**
The paper introduces a fresh approach to exploration in reinforcement learning, applying existing concepts in a new way. This application of masked autoencoders and pseudo-likelihood estimation to the problem of exploration in sparse reward environments is at least innovative.
**Quality**
The paper is well-structured and provides a comprehensive examination of the proposed MIMEx method. The paper creates a new benchmark dataset from an existing one with non-trivial modifications, and explains these modifications. The authors have conducted extensive experiments and provided a detailed ablation study, which adds to the quality of the paper. The paper does an adequate job of discussing its limitations and potential future directions for its work.
**Clarity**
The paper is very well-written and mostly clear in its presentation. The paper has done an excellent job of explaining the MIMEx method, its implementation details, and the experimental setup. The explanation of dense rewards for the PixMC environment, the construction of the PixMC-Sparse, and the use of figures and tables to illustrate the results is superb.
**Significance**
MIMEx shows superior performance relative to certain benchmark exploration algorithms, __random action noise__, __intrinsic curiosity module__, and __random network distillation__, some of which had been shown, at the time of their publication, to perform well on long-horizon sparse-reward tasks such as Montezuma's Revenge. The model allows for easy adjustment of the time horizons it considers, though tuning this hyperparameter may be difficult. MIMEx is agnostic to the choice of base RL algorithm and model, allowing it to be used on several problems.
---
Overall, this paper is easy to read and combines ideas into a novel exploration method that could be useful. However, the generalizability of most of the results is unclear to me, as are the increased compute, memory, and wall-clock costs, with respect to the horizon, of the MIMEx module.
Thank you for providing code with your submission for both sets of benchmarks!
**I have changed my score from a 5 to a 6 as a result of the authors' rebuttal, in which they show evidence of additional work that addresses my key concern, as well as additional results that clarify ambiguities I had. Thank you, Authors. I'm happy that my review helped you improve your work and that you considered and addressed my concerns by performing additional work and sharing these results clearly in your rebuttal.**
Weaknesses: The paper's main idea, while innovative in its application, is built upon existing concepts in the field, such as masked autoencoders and prediction-error-based exploration. It uses standard latent embeddings of observations and then reconstruction of masked embeddings to compute the prediction error, which is work that has been done for years; a recent example is [1], work that the paper transparently references (this transparency is a strength in clarity and overall). The paper could have better highlighted the unique aspects of its approach and how it diverges from or improves upon existing methods in reinforcement learning exploration. Though PixMC-Sparse gets a discrete bonus in some environments for reaching a certain intermediate state, the paper could have evaluated MIMEx on environments, either easily accessible or created by easily modifying existing ones, that focus on more diverse types of sparse rewards, such as discrete rewards instead of rewards continuous with respect to goal distance.
- The paper's quality could be improved with a more diverse set of experimental tasks and environments. The environments for which the hyperparameter and ablation studies were done do not generalize much outside of their specific domains, so they don't demonstrate the generalizability of the MIMEx method. However, I do understand that these experiments can be computationally expensive, but I would recommend performing additional ablations on environments that differ significantly from Kuka_____ in order to maximize confidence in the generalization of the results of these ablations.
- In lines 118-119, the paper states that "MIMEx can be added...as a lightweight module..." but there is no mention, let alone evaluation with respect to horizon $T$, of the increased memory footprint or wall-clock time needed to calculate the intrinsic reward.
- Later exploration methods [2, 3] that are catered toward sparsity or perform better than the baselines presented are not used as baselines. In the paper's defense, [3] was published just in Sep 2022. However, the issue is that the most recent baseline used is RND, which was published in 2018, and many better ones exist to evaluate MIMEx against.
- The paper could better justify the usage of each of the exploration baselines for readers who may not be familiar with RND's performance on Montezuma's Revenge, though it does do this somewhat in the Related Works section.
- Again, while the paper shows promising results on specific tasks, its impact could be limited if the method's effectiveness doesn't extend to a broader range of tasks and environments. The paper could increase its relevance and significance by demonstrating MIMEx's effectiveness in more diverse scenarios and discussing potential applications beyond the current scope.
[1] Xiao, T., Radosavovic, I., Darrell, T., & Malik, J. (2022). Masked visual pre-training for motor control. arXiv preprint arXiv:2203.06173.
[2] Zhang, T., Rashidinejad, P., Jiao, J., Tian, Y., Gonzalez, J. E., & Russell, S. (2021). Made: Exploration via maximizing deviation from explored regions. Advances in Neural Information Processing Systems, 34, 9663-9680.
[3] Eberhard, O., Hollenstein, J., Pinneri, C., & Martius, G. (2022, September). Pink noise is all you need: Colored noise exploration in deep reinforcement learning. In The Eleventh International Conference on Learning Representations.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Should the equality in the display mode equation in line 84 be an approximation since $|X|$ is not necessarily infinite?
2. Referring to lines 204-206, what are the internals of the transformer blocks? Could you clarify because not all transformer blocks are in the same order or contain the same properties as [4].
3. Why choose the three chosen exploration baselines, random action noise, intrinsic curiosity module, and random network distillation?
4. I'm confused by the inclusion of the sample curriculum in Table 3. Is this only an informative point, or is this something you evaluated?
5. In lines 233-234, the paper states "Trajectory-level exploration To our knowledge, MIMEx is the first framework that successfully incorporates sequence-level intrinsic reward to solve hard exploration tasks." Doesn't [5] do this implicitly as the discount factor in [5] is varied? Again, this is a relatively recent publication.
6. Minor suggestion: to make the results of Figure 2 more tangible and to improve readability, I suggest providing a visual and explanation of the KukaPick task alongside Figure 2, even though it is in the Appendix.
7. Do you have ideas on extensions of Online masked prediction to offline RL and any advantages or disadvantages using MIMEx with this paradigm?
[4] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is all you need. Advances in neural information processing systems, 30.
[5] Ramesh, A., Kirsch, L., van Steenkiste, S., & Schmidhuber, J. (2022). Exploring through random curiosity with general value functions. Advances in Neural Information Processing Systems, 35, 18733-18748.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The main limitation is the limited understanding of how the results generalize to problem domains, even sparse-reward ones, that differ from PixMC-Sparse, which most do.
The paper does address that MIMEx may hurt performance when rewards aren't sparse, but it could say more about the additional memory, compute expense, and wall-clock time needed to add the MIMEx intrinsic-reward generation model, which contains multiple multi-layer transformers.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the feedback. We addressed your main concern on the lack of diverse domains in our empirical study through experiments on two additional discrete-action environments. Our approach performs competitively against baselines, demonstrating its generalizability beyond continuous control tasks. Regarding your concern on the novelty, we would like to highlight that our main contribution is a novel framework that draws connections between masked autoencoding, pseudo-likelihood, and intrinsic rewards, as well as unifies many existing intrinsic bonus approaches. While each component of MIMEx has been explored in the past, the combination of them is what enables superior results. To the best of our knowledge, we are the first to explore using a random masked prediction objective for intrinsic bonus in the context of RL exploration.
**evaluate on more diverse types of sparse rewards e.g. discrete instead of continuous**
We agree with this point, and would like to clarify that our experimental results have already covered (1) tasks with a mix of discrete and continuous rewards and (2) tasks with only discrete rewards. Examples of the former: Pick, quadruped_run; examples of the latter: Reach, cartpole_swingup_sparse. We will add a table summarizing the detailed reward terms and indicating each reward term’s type (discrete/continuous) in our revised manuscript.
**more diverse tasks and environments**
Thank you for the suggestion. To check if MIMEx generalizes to other domains, we ran additional experiments to compare MIMEx with ICM and RND on the ALE *PRIVATE EYE* and *VENTURE* environments. Below, we report all results with 95% confidence intervals, over 5 random seeds on each method (ICM, RND, MIMEx).
[*PRIVATE EYE*]
| mean episodic return \ env steps | 10M | 25M | 50M | 100M | 200M |
|---|---|---|---|---|---|
| RND | -164.8 ± 279.0 | -188.4 ± 281.9 | -160 ± 413.9 | 4105.8 ± 112.0 | 6036.4 ± 7010.1 |
| ICM | -31.4 ± 103.7 | 2334.8 ± 2095.9 | 7864.6 ± 4605.5 | 11963.8 ± 746.1 | 23377.6 ± 9720.1 |
| MIMEx | -529 ± 416.4 | 104.6 ± 658.8 | 2344.8 ± 2776.8 | 7859.6 ± 6866.5 | 7717.2 ± 6239.4 |
[*VENTURE*]
| mean episodic return\env steps | 10M | 25M | 50M | 100M |
|----------------------------|------|------|------|---|
| RND | 580 ± 227 | 1080 ± 144 | 980 ± 73 | 1660 ± 159 |
| ICM | 380 ± 243 | 1240 ± 202 | 1400 ± 310 | 1560 ± 133 |
| MIMEx | 460 ± 320 | 1080 ± 114 | 1040 ± 100 | 1660 ± 48 |
Our results for ICM and RND are similar to those reported in prior works. For MIMEx, we only tuned the exploration beta parameter; without extensive hyperparameter tuning, MIMEx still performs comparably with ICM/RND. We hope these results improve confidence in the generalizability of MIMEx to other domains.
We will include the updated results and additional code in our revised work.
**MIMEx as a "lightweight" module**
We used the term “lightweight” to imply programmatic simplicity rather than low memory consumption or wall-clock time. We will remove this sentence in the next revision to avoid confusion.
**later exploration methods not used as baselines**
Thank you for the references; we find [2, 3] relevant and will include them in our revised paper. Still, we find our method and baselines more general in formulation, and thus more amenable to fair comparison. The strength of MIMEx is that it is a simple module that generalizes a number of prior approaches that are still commonly used in practice, while being more flexible than these approaches. We therefore compare MIMEx against ICM and RND, two existing works that are general in formulation and strong in performance, to ensure that performance differences come from differences in algorithmic formulation rather than implementation details. An interesting direction of future work would be to use other ideas in intrinsic motivation, such as [2, 3], together with our method.
**better justify the baselines**
We will update the writing to elaborate on why we chose the baselines (as discussed above).
**potential applications beyond the current scope**
We are particularly excited about applying MIMEx to real-world sensorimotor learning; many challenging robotic tasks are difficult to specify with a dense reward function (e.g. screwing on a water bottle cap, where un-grasping and re-grasping are needed). We hope MIMEx's promising results in simulation can transfer to real-world RL. We will add these discussions in the next revision.
**Questions**
1. $|X|$ is actually always finite here since we define $X$ as categorical variables. In this case, the masking distribution is a discrete uniform distribution and the equality holds.
2. We use transformer blocks as in [4], with the same order and properties.
3. We chose these baselines because of their effectiveness and generality across a wide range of domains and tasks.
4. We include this sample curriculum to illustrate how PixMC-Sparse could be easily modified to avoid saturation as exploration algorithms get more sophisticated. We provide one example of such evaluation in Figure 2 (on *KukaPick*).
5. [5] derives intrinsic rewards through predicting temporally extended general value functions, though being substantially different in terms of the environment it evaluates on (MiniGrid) and the focus (partial observability). We will include this reference in our next revision.
6. Thank you for the suggestion. We will do so in our revised version.
7. In the context of offline RL, online exploration is not possible since offline RL only uses previously collected data without additional online data collection. It is therefore unclear how intrinsic bonus can be applied in that setting.
**Limitations**
We included additional results (see above) and runtime/memory analysis (see global PDF) to address these limitations. | Summary: This work proposed a general framework for deriving intrinsic rewards called Masked Input Modeling for Exploration (MIMEx). This method starts from the observation that existing intrinsic reward approaches are special cases of conditional prediction, where the estimation of novelty can be seen as pseudo-likelihood estimation with different mask distributions. From this perspective, MIMEx derives an intrinsic reward based on masked prediction on input sequences, which naturally lends itself to controlling the difficulty of the underlying conditional prediction task. Empirical results on eight tasks from PixMC-Sparse and six tasks from the DeepMind Control Suite demonstrate that MIMEx outperforms other baselines regarding sample efficiency.
Strengths: 1. This work proposed a general framework for deriving intrinsic rewards, which can be applied to various hard-exploration tasks.
2. The most interesting part of MIMEx is that it enables extremely flexible control over the difficulty of conditional prediction for deriving intrinsic rewards. Section 5.4 provides comprehensive studies to understand how varying mask distribution affects the performance of MIMEx.
3. Extensive ablation studies have been provided to understand why MIMEx works better than other approaches. The comparisons are provided among diverse directions, including trajectory-level exploration, variance reduction, and model scalability.
Weaknesses: 1. The smoothness of writing can be improved, e.g., section 3.1 is not closely related to other parts of MIMEx. The motivation of using masked prediction loss as the intrinsic reward is not clear enough.
2. Line 81 cited a retracted paper.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. What motivates using masked prediction loss as the intrinsic reward?
2. When reading the introduction, I have the following questions: how is masked language modeling connected to pseudo-likelihood? Why can approaches that estimate novelty be viewed as modeling different conditional prediction problems, or masked prediction problems with different mask distributions?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: 1. One limitation of MIMEx is that it can potentially hurt performance on easier tasks that do not require exploration, as in the case of exploration bonus approaches.
2. Stronger bias from other sources, like expert demonstrations, is needed to improve the exploration ability of MIMEx further.
------------
After rebuttal
---------------
I would like to thank the authors for addressing part of my concerns. I agree to increase my score slightly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the feedback. We addressed individual comments and questions below, in particular your main concern on the validity of our method due to the citation of a retracted paper. We clarified that the key principle our method depends on is independent of the error in the retracted paper. Please let us know if there are any remaining issues that prevent our paper from getting a better score.
**”The smoothness of writing can be improved.”**
In section 3.1, we describe details of MIMEx’s formulation and architecture in the form of a sequence autoencoder; we see that the section title could be potentially confusing, and will update it in our revised manuscript to “Sequence-Level Masked Autoencoders” to make for a better summary of the section and a smoother transition from the former paragraph. We will also update the writing to strengthen the motivation of using masked prediction loss as the intrinsic reward (as we will elaborate under “Questions”) and improve the overall smoothness in our next revision.
Please let us know if you find any other specific places where writing could be improved, and we will look into improving them accordingly.
**”Line 81 cited a retracted paper.”**
As we noted on line 81 (with a footnote that points to Appendix A.1) in our submission, while [40] was later retracted due to an error, we are aware of the error and our claims remain valid despite the error in the cited work. In [40], the authors mistakenly viewed BERT [9] as a Markov Random Field (MRF). While the original goal of [40] was to derive a procedure for sampling from masked language models (MLMs) by viewing them as MRFs, the work also inspired the use of MLM prediction error as a way of scoring sequences when decoding from a language model. The way we propose to use MLMs is similar to the latter, i.e. as a proxy metric for scoring sequence-level predictions of trajectory information. In other words, we do not formally treat the MLM in MIMEx as an MRF, and we do not attempt to obtain conditional distributions from which one generates samples. We also note that a correct energy-based view was later proposed in [13], which does not change the argument that we put forth either. While sampling from an energy-based model is expensive, we only seek to obtain a useful stochastic estimate of the energy function for the purpose of scoring. We will clarify the relationship between our method and [40] further in our next revision of the paper.
**Questions**
**1.** As we discussed in Section 3, the use of masked prediction loss is primarily motivated by its flexibility and effectiveness (as demonstrated by empirical results). Masked prediction can be applied to input sequences with arbitrary length and at arbitrary mask distribution; such a framework naturally lends itself to greater control over the difficulty of the underlying conditional prediction problem. By setting up conditional prediction problems on trajectories, we can obtain intrinsic rewards that consider transition dynamics across longer time horizons and extract richer exploration signals. We can also easily tune the difficulty of the prediction problem, by varying both the input length and the amount of conditioning context given a fixed input length. (Meanwhile, existing approaches framed as conditional prediction often consider one-step future prediction problems, which can saturate early as a useful exploratory signal; longer time-horizon prediction problems capture more complex behavior, but they can suffer from high variance.)
Additionally, we are motivated by the generality and scalability of the masked autoencoding objective. Masked autoencoding relies on less domain knowledge compared to methods like contrastive learning, and has proven success across many different input modalities. We can also leverage standard architectures such as those used in masked language modeling and masked image modeling, for which the scalability and stability have been tested.
**2.** Masked language modeling is connected to pseudo-likelihood through the estimation of conditional distributions. Pseudo-likelihood is a way of approximating likelihood by modeling the conditional distributions of variables given all other variables. Masked language modeling, such as the masked autoencoding objective in models like BERT, can be seen as a form of stochastic maximum pseudo-likelihood estimation, as we illustrated in section 2.3.
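To spell out this connection in symbols (our own notation, as a sketch rather than a formula from the paper): for a sequence $x = (x_1, \dots, x_T)$, the log-pseudo-likelihood replaces the joint likelihood with a sum of conditionals, and uniform random masking yields an unbiased stochastic estimate of that sum:

```latex
\log \mathrm{PL}_\theta(x) = \sum_{t=1}^{T} \log p_\theta(x_t \mid x_{-t}),
\qquad
\mathbb{E}_{t \sim \mathcal{U}\{1,\dots,T\}}
\big[ \log p_\theta(x_t \mid x_{-t}) \big]
= \frac{1}{T} \sum_{t=1}^{T} \log p_\theta(x_t \mid x_{-t}).
```

This is why training on randomly masked positions can be read as stochastic maximum pseudo-likelihood estimation; masking several positions at once further approximates each conditional $p_\theta(x_t \mid x_{-t})$ by conditioning only on the unmasked subset.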
In masked language modeling, a model is trained to predict masked or missing tokens in a sequence. The model approximates the underlying joint distribution among variables by modeling the conditional distributions of the masked tokens given the rest of the sequence. By optimizing the model to predict the masked tokens, it effectively estimates the likelihood of the observed sequence. Approaches that estimate novelty can therefore be viewed as modeling different conditional prediction problems or masked prediction problems with different mask distributions; Table 1 provides several such examples. By applying different mask distributions, which determine the pattern of masking in the input data, various aspects of novelty can be captured. For example, masking a subset of state variables can measure novelty in states, while masking current or next-step features can estimate novelty in state transitions. Each approach represents a different way of modeling the conditional distributions and approximating the likelihood of the masked components.
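To make the mechanism concrete, here is a minimal, self-contained sketch (our illustration, not the paper's implementation) of using masked-prediction error over a trajectory of latent observations as an intrinsic bonus. All names are ours, and the `mean_context_predictor` is a toy stand-in where a learned masked autoencoder would go:

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_prediction_bonus(traj, predict_masked, mask_ratio=0.5, n_masks=4):
    """Toy intrinsic bonus: average squared error when predicting randomly
    masked timesteps of a latent trajectory from the unmasked context."""
    T, _ = traj.shape
    n_masked = max(1, int(mask_ratio * T))
    errors = []
    for _ in range(n_masks):
        # a different mask ratio/pattern corresponds to a different
        # conditional prediction problem (harder or easier)
        idx = rng.choice(T, size=n_masked, replace=False)
        mask = np.zeros(T, dtype=bool)
        mask[idx] = True
        pred = predict_masked(traj, mask)
        errors.append(np.mean((pred[mask] - traj[mask]) ** 2))
    # averaging over several masks reduces the variance of the estimate
    return float(np.mean(errors))

def mean_context_predictor(traj, mask):
    # stand-in "model": predict each masked latent as the mean of the
    # unmasked ones; a trained masked autoencoder would slot in here
    pred = traj.copy()
    pred[mask] = traj[~mask].mean(axis=0)
    return pred

familiar = np.ones((8, 4))         # perfectly predictable trajectory
novel = rng.normal(size=(8, 4))    # hard-to-predict trajectory
assert masked_prediction_bonus(familiar, mean_context_predictor) == 0.0
assert masked_prediction_bonus(novel, mean_context_predictor) > 0.0
```

The predictable trajectory receives zero bonus while the unpredictable one receives a positive bonus, which is the exploratory signal described above.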
**Limitations**
Thank you for summarizing the limitations that we mentioned in Section 7 of the manuscript. We have been working on improving our exploration framework so that it generalizes to scenarios where exploration is not required and can draw on stronger bias from sources like expert demonstrations; we will report the method and results in follow-up work.
---
Rebuttal Comment 1.1:
Title: Reply to author
Comment: Thank you for addressing part of my concerns. I agree to increase my score slightly. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their valuable feedback. Below, we respond to each reviewer individually.
In the PDF attached below, we include additional results on wall-clock time/GPU memory usage (reviewer mFvm, reviewer SVND) and mask distribution ablation studies (reviewer if59).
Pdf: /pdf/3fa7459fbda96c01bf866717e58ccc62af5aa349.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Supported Value Regularization for Offline Reinforcement Learning | Accept (poster) | Summary: This paper studies the offline RL problem. The authors propose adding Supported Value Regularization (SVR) when learning Q functions, motivated by the way CQL adds value regularization. They apply SVR to all OOD actions while maintaining the Bellman update for ID samples. Experiments show that the SVR-regularized method can outperform other competitors on D4RL tasks.
Strengths: 1. This paper proposes Supported Value Regularization (SVR) for solving offline RL problems. The authors suggest adding extra regularization on OOD points and utilize importance sampling techniques to calculate the Q-values in the OOD region.
2. The superiority of the SVR method is demonstrated through experimental studies.
3. A theoretical proof shows that the policy improvement step leads to a better policy at each iteration.
Weaknesses: Since the method adds regularization to OOD regions, it belongs to the family of conservative offline RL methods. It is suggested to compare this method with other less/mildly conservative methods. In addition, as SVR is a density-based regularization, the authors are encouraged to compare with other density-based offline RL methods. Some less/mildly conservative or density-based offline RL methods are listed below; the authors cite some of them, and it would be better to compare against some of them.
[1] Supported Policy Optimization for Offline Reinforcement Learning. https://arxiv.org/abs/2202.06239
[2] Mildly Conservative Q-Learning for Offline Reinforcement Learning. https://arxiv.org/abs/2206.04745
[3] Provably Good Batch Reinforcement Learning Without Great Exploration. https://arxiv.org/abs/2007.08202
[4] A Behavior Regularized Implicit Policy for Offline Reinforcement Learning. https://arxiv.org/pdf/2202.09673.pdf
[5] APAC: Authorized Probability-controlled Actor-Critic For Offline Reinforcement Learning. https://arxiv.org/abs/2301.12130
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Some notation is unclear. For example, what is $\beta$ in Equation 3? It does not appear before this point, though we can infer it is the behavior policy. It is suggested to give an explanation.
2. A key component of this paper is estimating the support values. State-action pairs are regarded as ID when $\beta(a|s) > 0$; however, when using a Gaussian model to estimate the behavior policy, all actions have a density larger than 0 (since the Gaussian density is positive everywhere), but clearly not all actions are in-distribution.
3. In addition, when estimating the behavior policy $\beta_w$, a simple Gaussian model is suggested. However, the dimension of the state could be large, and the entire support of the behavior policy relies on the behavior density estimation; a simple Gaussian model may not deliver a strong estimator. In Section 4.5, the authors give a simple comparison between SVR and SVR-VAE; we suggest giving a more detailed analysis.
4. The sampling distribution $u(a,s)$ is selected as a Gaussian with mean $\pi$ in Section 3.4. Could the authors give an extra ablation study on how $u(a,s)$ affects the results? Is selecting $u(a,s)$ as a Gaussian generally optimal or close to optimal?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: See weakness and questions
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the time and effort you have dedicated to providing feedback on our paper and are grateful for the meaningful comments.
**[W] It is suggested to compare with other less/mild conservative method. In addition, it is suggested to compare with other density-based offline RL methods.**
Thanks a lot for the references. We supplement the results of these methods in Table 1 in the one-page PDF (attached to the global response). As the results show, SVR has better overall performance than these methods on both Gym-MuJoCo and Adroit tasks. Compared with less/mild conservative methods, SVR aims to solve a different, more fundamental problem: SVR focuses on which actions will be penalized, rather than the strength of penalization that less/mild conservative methods consider.
**[Q1] Some notations are not clear, e.g., $\beta$ in Equation 3.**
Sorry for that. Before Equation 3, $\beta$ appears in Line 75: "a fixed dataset $\mathcal{D}$ collected by some behavior policy $\beta$". We will make it clearer in a later revision.
**[Q2,Q3] The state-action pairs are regarded as ID when $\beta(a|s)>0$. However, when using a Gaussian model to estimate the behavior policy, all actions have a density larger than 0, yet clearly not every action is in distribution. Since the dimension of the state could be large and the entire support of the behavior policy relies on the behavior density estimation, a simple Gaussian model may not deliver a strong estimator.**
Thanks for your comment. The definition of ID state-action pairs is $\beta(a|s)>0$ where $\beta$ is the true behavior policy. However, with an estimated behavior policy $\hat{\beta}$, SVR does not distinguish between ID and OOD actions by whether $\hat{\beta}>0$, as the density of OOD actions is particularly difficult to estimate and the behavioral model error can cause extrapolation error and overestimation in this way. Instead, like Eq.5, SVR penalizes all the actions in action space and compensates for ID actions. When $\hat{\beta}$ is estimated accurately, the regularization effects on ID Q-values cancel out, achieving penalization for all OOD Q-values only (Eq.6). **With model error of $\hat{\beta}$ incorporated, $\mathbb{E}_{a \sim \beta}$ in Eq.5 is not affected and the model error only affects the weight of rewarding true ID actions (the IS ratio in the last term)**. As a result, an imperfect model can make the minimization and maximization for ID Q-values not cancel out well, but have little effect on OOD ones (still penalizing all OOD Q-values). **In conclusion, the behavioral model is not used to distinguish between ID and OOD actions in SVR, and the model error can only lead to some unnecessary changes to ID Q-values and will not generate extrapolation error or OOD overestimation.** In addition, compared with other methods that require the behavior model, SVR is less susceptible to model errors because SVR only needs to query the behavior density of in-dataset (s, a) pairs, thus not requiring much generalization ability of the model, making it relatively easier to estimate accurately.
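To make the cancellation argument concrete, here is a minimal numerical sketch on a single-state, discrete-action toy problem. The function name `svr_penalty` and the exact penalty form (penalize all actions under $u$, then compensate dataset actions with the importance ratio $u/\hat{\beta}$) are our schematic reading of Eq.5, not the paper's released implementation:

```python
import numpy as np

# Single state, 5 discrete actions; the first 3 are in-distribution (beta > 0).
q    = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # current Q-values Q(s, a)
beta = np.array([0.5, 0.3, 0.2, 0.0, 0.0])   # true behavior policy
u    = np.full(5, 0.2)                       # covering sampling distribution

def svr_penalty(q, u, beta, beta_hat):
    # Penalize ALL actions via E_{a~u}[Q(s,a)^2] ...
    penalize_all = np.sum(u * q ** 2)
    # ... and compensate ID actions with the importance-weighted term
    # E_{a~beta}[(u(a|s)/beta_hat(a|s)) * Q(s,a)^2]; this expectation only ever
    # touches dataset actions, so beta_hat is never queried outside the support.
    idx = beta > 0
    compensate_id = np.sum(beta[idx] * (u[idx] / beta_hat[idx]) * q[idx] ** 2)
    return penalize_all - compensate_id

# With an exact behavioral model, the ID terms cancel and only the two
# OOD Q-values remain penalized: 0.2 * (4**2 + 5**2) = 8.2.
penalty = svr_penalty(q, u, beta, beta_hat=beta)
```

An inaccurate `beta_hat` changes only the compensation weights on ID actions; the `penalize_all` term, and hence the pressure on OOD Q-values, is unchanged.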
To empirically investigate SVR under different behavioral model errors, we run SVR using different checkpoints of the behavioral model, which are obtained at different steps in the behavioral model training process. The model error is controlled by the number of steps taken to train the behavioral model. The results are shown in Figure 2 in the one-page PDF. We observe that the performance of SVR increases with the number of training steps of the behavioral model. Notably, the performance of SVR stabilizes at a high level after only 1e2 steps of behavioral model training, where the model has not been adequately trained. It indicates that SVR can achieve good performance with an imperfect behavioral model.
**[Q3] In Section 4.5, the authors give only a brief comparison between SVR and SVR-VAE; we suggest giving a more detailed analysis.**
Thanks for your suggestion. In Section 4.5, we found that the advantages of the VAE estimator are not well demonstrated on the common D4RL datasets. Thus, we conduct additional experiments on bimodal datasets, which are constructed by mixing the hopper-expert dataset with another dataset collected by a narrow and highly suboptimal Gaussian policy $N(0,0.04)$. In this case, a Gaussian cannot model the behavior policy well. The results are shown in Figure 3 in the one-page PDF. Benefiting from the flexibility of the VAE estimator, SVR-VAE obtains better results.
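As a rough illustration of why a unimodal fit fails on such data, the snippet below fits a single Gaussian by maximum likelihood to a synthetic bimodal action dataset (the means and scales are made up for illustration; the actual experiment mixes hopper-expert trajectories with the narrow suboptimal policy):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic bimodal 1-D action data: an "expert" mode and a narrow
# suboptimal mode (illustrative stand-in for the mixed dataset above).
actions = np.concatenate([
    rng.normal(0.9, 0.05, size=5000),   # expert-like actions
    rng.normal(0.0, 0.05, size=5000),   # narrow suboptimal policy
])

# Maximum-likelihood single-Gaussian behavior model
mu, sigma = actions.mean(), actions.std()

def gauss_pdf(x):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# The fit peaks in the empty region between the modes, so the estimated
# density is higher for inter-mode (OOD) actions than at either true mode.
density_gap   = gauss_pdf(0.45)
density_modes = gauss_pdf(np.array([0.0, 0.9]))
```

A mixture or VAE estimator avoids this failure mode, which matches the advantage SVR-VAE shows on the bimodal datasets.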
**[Q4] The sampling distribution $u(a|s)$ is selected as a Gaussian with mean $\pi$ in Section 3.4. Could the authors give an extra ablation study on how $u(a|s)$ affects the results? Is selecting $u(a|s)$ as a Gaussian generally optimal or close to optimal?**
Thanks for your comment. We conducted this ablation in Section 4.4 of the paper, including Gaussians with various variances and the uniform distribution. It shows that SVR is insensitive to $u$ over a wide range. Theoretically, a choice of $u$ is optimal as long as it covers the entire action space. Empirically, a Gaussian with moderate variance satisfies this requirement and can emphasize the areas where overestimation is most likely to occur (as indicated by the current policy). We only considered Gaussian and uniform distributions as they are the most common samplable continuous distributions. While it is possible to design a more complex $u$ (e.g., based on uncertainty estimation), it can be prohibitively difficult or expensive to sample from such complex distributions.
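For concreteness, a sketch of the Gaussian choice of $u$ described above: centred at the current policy's action, variance inflated so samples cover the bounded action space, then clipped to the valid range (all constants here are illustrative, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)
a_low, a_high = -1.0, 1.0     # bounded action space
pi_mean = 0.3                 # current policy's action at state s (illustrative)

def sample_u(n, sigma=0.5):
    # Gaussian centred on the current policy with inflated variance, so the
    # samples cover the whole action space while concentrating probability
    # where overestimation is most likely (near the current policy).
    return np.clip(rng.normal(pi_mean, sigma, size=n), a_low, a_high)

samples = sample_u(10_000)
```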
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: Thank you for your reply. Your explanation addresses some of my concerns, and I appreciate you for conducting extra experiments.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Thank you for your feedback and dedication to our paper!
We are happy to have addressed some of your concerns. May we kindly ask if you still have concerns or questions unaddressed? If so, we really want to discuss and address them in the time we have. If our response and additional experiments have addressed your major concerns, would you mind considering increasing your score based on the updated information? Following the valuable suggestions from you and other reviewers, we believe this work has been further strengthened.
Thanks! | Summary: This paper proposes the use of Importance sampling to distinguish between ID and OOD actions, and to operate the corresponding actions accordingly. In addition, as most of the current work is similar, it also uses model to fit behavior policy, but does not need to overly consider the accuracy of the model. It underestimates OOD actions and unbiased estimates ID actions, achieving good performance on D4RL and Adroit tasks.
Strengths: 1. The paper conducts relatively sufficient experiments to demonstrate its viewpoints.
2. The proofs of theorems and propositions are relatively sufficient.
3. The hyperparameter and parameter settings in the paper are clearly defined for easy reproduction.
Weaknesses: 1. Importance sampling is widely used in reinforcement learning because of its unbiased nature. In this paper, the main contribution is to use importance sampling to penalize the deviations of OOD and ID actions respectively. The differences or advantages of the proposed SVR method over existing methods should be described clearly.
2. The description of certain experiments should be clearer. For example, what do the colors of the grid represent in Figure 1? In Figures 1 and 2, it is hard to see how the value function described in the paper is represented.
3. The comparative experiments only include some classic algorithms and lack algorithms of the same type, such as those using importance resampling (Zhang et al. 2023), and there is no mention of ablation experiments on the penalty coefficient $\alpha$.
(Zhang et al. 2023) Hongchang Zhang, Yixiu Mao, Boyuan Wang, Shuncheng He, Yi Xu, Xiangyang Ji. In-sample Actor Critic for Offline Reinforcement Learning. International Conference on Learning Representations, 2023.
https://openreview.net/forum?id=dfDv0WU853R
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Recently, some works use importance sampling or importance resampling to determine whether an action is OOD. Please explain the relationship between your work and these works, or explain the differences and advantages of your method.
2. In Figure 2, I cannot see that there is a serious overestimation problem with the value function caused by vanilla policy iteration.
3. Please provide a more detailed explanation of Figure 1, such as what the colors of the grid represent.
4. Why does the Walker2d-v2-random run in Figure 1 in the appendix cover only a small portion and not continue?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: There is no negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the time and effort you have dedicated to providing feedback on our paper and are grateful for the meaningful comments.
**[W1, Q1] Importance sampling is widely used in reinforcement learning because of its unbiased nature. The differences or advantages between the proposed SVR method and existing methods should be described clearly. Recently, there has been a way to use Importance sampling or Importance resampling to determine whether it belongs to OOD actions.**
Thanks for your suggestion. Importance sampling's application in RL has a long history. Most applications utilize some form of importance sampling either to evaluate the return of a given policy or to estimate the corresponding policy gradient [1,2]. However, they can have very high variance due to the product of importance weights. In contrast, SVR focuses on value regularization in offline RL and involves only one importance weight. Recently in offline RL, some works use importance sampling (or importance resampling) to realize in-sample learning [3,4]. They formulate the Bellman target with the actions in the dataset (SARSA update) and weigh the update by an importance ratio. Thus, they do not differentiate between ID and OOD actions or perform regularization on Q functions. One limitation of these methods is that they only deal with in-sample actions (actions in the dataset) and cannot take advantage of the generalization ability of neural networks. The goal of SVR is quite different from that of these works: to penalize all OOD Q-values without affecting ID ones. To the best of our knowledge, our work is the first to leverage importance sampling to achieve proper value regularization in offline RL.
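The variance contrast can be made explicit with a standard idealization (a modeling assumption for illustration, not taken from the paper): independent per-step importance weights with mean 1 and variance $v$ over a horizon of $T$ steps.

```python
import numpy as np

# For independent per-step weights w_t with E[w] = 1 and Var(w) = v, the
# T-step product has E[(prod w)^2] = (1 + v)**T, hence
# Var(prod) = (1 + v)**T - 1: exponential growth in the horizon T.
v = 0.25
T = np.arange(1, 11)
var_product = (1 + v) ** T - 1

# A single importance weight, as used inside SVR's regularizer, keeps the
# variance fixed at v regardless of the horizon.
```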
**[W2, Q3] What do the colors of the grid represent in Figure 1? In Figures 1 and 2, it is hard to see how the value function described in the paper is represented.**
Sorry for the unclear description. The colors of the grid in Figure 1 represent the value functions of the corresponding states, and their specific values are indicated by the color bar on the right side of Figure 1. In addition, the gray/white rectangle area in the middle of the map represents a wall.
**[W3] Lack some algorithms of the same type in the comparative experiment, such as using importance resampling [4], and there is no mention of the ablation experiments about the penalty coefficient $\alpha$.**
Thanks a lot for your suggestions. We supplement the results of IAC [4] in Table 1 in the one-page PDF (attached to the global response) for comparison. It shows that SVR performs better than IAC on both Gym-MuJoCo and Adroit tasks. In addition, we also conduct an ablation study on the penalty coefficient $\alpha$ in Figure 1 in the PDF. Experimental results indicate that choosing $\alpha \in [1e-3, 1e-2]$ generally induces good performance.
**[Q2] In Figure 2, I cannot see that there is a serious overestimation problem with the value function caused by the iteration of the vanilla policy.**
Sorry for the unclear description. Figure 2 presents the episode return obtained by evaluating the learned policy at each iteration, which represents the actual performance of the learned policy rather than the learned value function. As mentioned in [W2, Q3], the colors of the grid in Figure 1 represent the learned value functions; they show that vanilla policy iteration severely overestimates the value function.
**[Q4] Why does the Walker2d-v2-random run in Figure 1 in the appendix cover only a small portion and not continue?**
This is because severe overestimation occurred and the run was stopped automatically. Walker2d-random has extremely narrow data coverage, and most existing algorithms suffer from severe overestimation on it, resulting in poor performance (see Table 1 in the paper). While choosing a large penalty coefficient $\alpha$ in SVR can mitigate the overestimation on Walker2d-random, we do not set a particular hyperparameter for it, as hyperparameter tuning in offline RL should be avoided as much as possible.
[1] Doina Precup. Eligibility traces for off-policy policy evaluation. Computer Science Department Faculty Publication Series, 2000.
[2] Nan Jiang and Lihong Li. Doubly robust off-policy value evaluation for reinforcement learning. International Conference on Machine Learning, 2016.
[3] Yiqin Yang, et al. Believe what you see: Implicit constraint approach for offline multi-agent reinforcement learning. Advances in Neural Information Processing Systems, 2021.
[4] Hongchang Zhang, et al. In-sample Actor Critic for Offline Reinforcement Learning. International Conference on Learning Representations, 2022.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their effort in addressing the concerns and providing the additional experiment results.
From the reply to W1 and Q1, we can conclude that the main difference between SVR and existing works is that it utilizes the generalization ability of neural networks rather than conducting in-sample learning. However, this approach is similar to conservative learning methods, which penalize OOD actions without affecting ID actions.
For others, the authors provide more detailed descriptions of the figures and the paper.
After carefully considering the other reviews and the corresponding rebuttals, we decide to maintain the original score.
---
Reply to Comment 1.1.1:
Title: Thank you for your feedback
Comment: Thank you for your feedback! We are happy to address some of your questions, but we still want to clarify a few things.
Utilizing the generalization ability of neural networks is an advantage of conservative methods over in-sample methods, and SVR belongs to the conservative methods. **The main contribution of SVR is that it penalizes *all* OOD actions *without* affecting ID actions, which, to our knowledge, is *not* realized by other conservative methods even though they aim to do so**. Benefiting from this property, SVR guarantees optimal convergence in tabular MDPs and shows impressive performance empirically. Importance sampling is just a component of SVR used to realize this idea. | Summary: This paper proposes a new offline RL method in which the popular assumption that the new policy should stay close to the behavior policy is abandoned; penalizing only the OOD actions is sufficient. The analysis shows that the method has the policy improvement property. Experimental results partially verify the effectiveness of this 'simpler' method (compared to CQL).
Strengths: The presentation is clear and the proposed method is easy to understand.
The final results look good.
Weaknesses: 1. By relaxing the behavior cloning requirement (the first equation of Eq. (4)) and only penalizing OOD actions, the new policy has a better chance of finding a better one. This is reasonable. However, since the search space becomes larger, the learning process should become more difficult. Unfortunately, the experimental results (Table 1) are eager to show the method is good without illustrating how the reward is accumulated, as is usually done.
2. According to Eq. (7), this method would be even more sensitive to the quality of the learnt behavior policy than CQL, while the behavior policy is generally quite difficult to learn. This may generate extrapolation error. For example, if we use an approximate Gaussian behavior policy and $a_1$, $a_2$ are both in-sample actions at state $s$, then an interpolated OOD action $a_3$ between $a_1$ and $a_2$ would likely have a large probability density under the behavior policy, which would introduce extrapolation error. Hence the paper should provide more evidence to support the claim that 'SVR is less susceptible to model errors'.
3. Generalization problem. The proposed SVR could be considered a clipped Conservative Q-Learning, where the part with ID actions is dropped. Then the Q-values generated by this method would have the same generalization problem as CQL. For example, if we have learnt an approximate behavior policy perfectly, then the Q-value of any perturbed state-action pair (unseen but close to the dataset) would drop greatly, although we may need a smoother Q function to enhance robustness [RORL: Robust Offline Reinforcement Learning via Conservative Smoothing].
4. Lack of experiments exploring the behavior of the SVR agent. That is, the paper is supposed to show how SVR behaves at ID states and OOD states in practice.
5. Although the theoretical analysis demonstrates the performance improvement guarantee of SVR, it deviates from the core problem that this paper attempts to solve, namely that SVR introduces smaller detrimental changes to in-distribution Q-values.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see above
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the time and effort you have dedicated to providing feedback on our paper and are grateful for the meaningful comments.
**[W1] The experimental results (Table 1) are eager to show this method is good, without illustrating how the reward is accumulated as usually done.**
We included all the learning curves of SVR in Section B of the appendix. Overall, SVR has a stable learning process and good asymptotic performance.
**[W2] According to eq.(7), this method would be even more sensitive to the quality of the learnt behavior policy. It may generate extrapolation error. The paper should provide more evidence to support the claim that 'SVR is less susceptible to model errors'.**
Thanks for your suggestion. With an estimated behavior policy $\hat{\beta}$, SVR does not distinguish between ID and OOD actions by whether $\hat{\beta}>0$ like Eq.7, as the density of OOD actions is particularly difficult to estimate and the behavioral model error can generate extrapolation error in this way as you mentioned. Instead, like Eq.5, SVR penalizes all the actions in action space and compensates for ID actions. When $\hat{\beta}$ is estimated accurately, the regularization effects on ID Q-values cancel out, achieving penalization for all OOD Q-values only (Eq.6), leading to the optimal solution Eq.7. However, **with model error of $\hat{\beta}$ incorporated, Eq.7 does not become that where $\beta$ is substituted by $\hat{\beta}$**, because in Eq.5, the model error does not affect $\mathbb{E}_{a \sim \beta}$ and only affects the weight of rewarding ID actions (the IS ratio in the last term). As a result, an imperfect model can make the minimization and maximization for ID Q-values not cancel out well, but have little effect on OOD ones (still penalizing all OOD Q-values). **In conclusion, the behavioral model is not used to distinguish between ID and OOD actions in SVR, and the model error can only lead to some unnecessary changes to ID Q-values and will not generate extrapolation error or OOD overestimation.** In addition, compared with other methods that require the behavior model, SVR is less susceptible to model errors because SVR only needs to query the behavior density of in-dataset (s, a) pairs, thus not requiring much generalization ability of the model, making it relatively easier to estimate accurately.
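The bolded claim can be checked numerically on a toy discrete example (the per-action penalty decomposition below is our schematic reading of Eq.5, not the released implementation): perturbing $\hat{\beta}$ shifts only the ID entries of the regularizer, while the OOD entries are untouched.

```python
import numpy as np

q    = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # Q-values for 5 actions at one state
beta = np.array([0.5, 0.3, 0.2, 0.0, 0.0])   # true behavior policy; last two OOD
u    = np.full(5, 0.2)                       # covering sampling distribution

def per_action_penalty(beta_hat):
    # Penalize every action under u; compensate only dataset actions, which
    # are drawn from the TRUE beta, weighted by the ratio u / beta_hat.
    penalty = u * q ** 2
    idx = beta > 0
    penalty[idx] -= beta[idx] * (u[idx] / beta_hat[idx]) * q[idx] ** 2
    return penalty

exact = per_action_penalty(beta)                                        # perfect model
noisy = per_action_penalty(beta * np.array([2.0, 0.5, 1.5, 1.0, 1.0]))  # model error
# OOD entries (indices 3, 4) are identical under both models; with a perfect
# model the ID entries cancel to zero, and model error only perturbs them.
```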
To empirically investigate SVR under different behavioral model errors, we run SVR using different checkpoints of the behavioral model, which are obtained at different steps in the behavioral model training process. The model error is controlled by the number of steps taken to train the behavioral model. The results are shown in Figure 2 in the one-page PDF (attached to the global response). We observe that the performance of SVR increases with the number of training steps of the behavioral model. Notably, the performance of SVR stabilizes at a high level after only 1e2 steps of behavioral model training, where the model has not been adequately trained. It indicates that SVR can achieve good performance with an imperfect behavioral model.
**[W3] The Q value generated by this method would have the same generalization problem as CQL.**
Thanks for your meaningful comment. Recently, some works have considered the generalization problem of Q functions, aiming to learn smoother or less conservative Q functions [1,2]. In contrast, SVR aims to solve a different, more fundamental problem: SVR focuses on which actions will be penalized, instead of the strength of penalization that these methods consider. In addition, such methods that improve the generalization of Q-networks can also be combined with SVR in practice; for example, by substituting $Q_{\mathrm{min}}$ in Eq.16 with a value that is slightly smaller than the in-distribution maximum, as MCQ does [2].
[1] Rui Yang, et al. RORL: Robust Offline Reinforcement Learning via Conservative Smoothing. Advances in Neural Information Processing Systems, 2022.
[2] Jiafei Lyu, et al. Mildly Conservative Q-Learning for Offline Reinforcement Learning. Advances in Neural Information Processing Systems, 2022.
**[W4] The paper is supposed to show how the SVR behaves at the ID states and OOD states in practice.**
Thanks for your meaningful comment. The main purpose of this work is to punish OOD actions, which cause value over-estimation. Compared to OOD actions, OOD states cause less severe problems (mainly the state-deviation issue during test time [3]), and much fewer works focus on them. Since the contribution of SVR does not lie in handling OOD states, we only investigate the behavior of SVR with respect to ID/OOD actions. Figure 1 in the paper shows that SVR not only avoids taking OOD actions, but also chooses the optimal ID actions.
[3] Hongchang Zhang, et al. State Deviation Correction for Offline Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 2022.
**[W5] Although the theoretical analysis demonstrates the performance improvement guarantee of SVR, it deviates from the core problem, where SVR introduces smaller detrimental changes to in-distribution Q-values.**
We would like to emphasize Theorem 1. According to it, policy iteration with the SVR operator outputs unbiased Q-values for all ID actions and underestimated Q-values for all OOD actions. Benefiting from this property, SVR achieves performance improvement and an optimal convergence guarantee.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification and I would keep my previous score unchanged.
---
Reply to Comment 1.1.1:
Title: Thank you for your feedback
Comment: Thank you for your feedback! If you have any further questions, please post them. We would be more than happy to resolve any remaining questions in the time we have, and are looking forward to engaging in a discussion. | Summary: The authors propose to enforce a new squared penalization term for computing the target Q-values in offline RL. In particular, their penalty tries to only apply to target out-of-distribution (OOD) actions by taking the difference between the importance-sampled and the true estimates of a uniform distribution over actions (or any other distribution will full support on the action space), where the importance-ratio is estimated with a proxy model for the behavior policy. Under idealized theoretical settings, the paper shows that off-policy optimization using this penalty has a fixed point which penalizes only OOD actions and guarantees strict policy improvements. The authors empirically validate their algorithm showing effective performance on a subset of D4RL and a toy maze setting.
Strengths: 1) The paper provides a concise and comprehensive introduction to the off-policy literature and the utilized notation in Section 2.
2) The paper correctly recognizes an important issue of existing popular algorithms in the offline literature: while their stated aim is to penalize behavior only outside the support set of the offline datasets, in practice they resort to density-based heuristic penalties.
3) Overall, the paper is clear and easy to read.
4) The authors provide some analysis and ablations beyond just reporting their method's performance.
Weaknesses: 1) Generalizing some of the mentioned regularization issues to the whole literature would require very comprehensive evidence which the paper does not provide, as it mostly focuses on CQL when analyzing prior work. Hence, I would tone down some of the statements, e.g. lines 26-28 "existing value regularization methods not only fall short in penalizing all OOD Q-values but also may introduce detrimental changes to in-distribution (ID) ones" -> "some of the most popular existing value...". Additionally, claims like "SVR guarantees strict policy improvement until convergence to the optimal support-constrained policy" (lines 333-334) should specify that this assumes an idealized setting and does not apply in the stochastic optimization regime of deep learning.
2) Related to the point above, I believe it would be very appropriate for the paper to mention the assumptions/conditions used to derive the theoretical results in the introduction and abstract to avoid having unbacked claims.
3) A limitation of the method is that it requires access to the behavior's policy density. It would be useful to also analyze the properties of the proposed penalization (theoretically and/or empirically) based on the modeling errors for the trained behavior policy proxy.
4) In practice, CQL's policy is most often also a Gaussian distribution and thus covers the full support of the action space. Yet, the authors do not seem to consider or mention this property in their analysis. Moreover, since the authors also resort to using the agent's Gaussian policy with an increased variance as the sampling (action-space-covering) penalization distribution, it appears to me that the actual implementation of their algorithm is very similar to existing off-policy methods (e.g., even to CQL).
5) The toy example in Section 4.1 seems quite misleading. From a practical perspective, it seems to me like CQL and SVR are algorithms with similar properties whose performance depends on hyperparameters. Hence, I believe that simply showing one works and the other fails with no context (swept hyperparameters/different collected datasets) is not very informative.
6) While the authors show some extensions of SVR replacing the Gaussian density estimator with a conditional VAE and using self-normalized importance sampling, results appear inconclusive and there is very little analysis done to motivate these findings.
7) The selection of the distribution covering the whole action space clearly influences the penalization magnitude of different actions (Equations 5-6), and therefore performance. There does not appear to be any clear principle for choosing this important hyperparameter for a given problem, apart from looking at online performance, which would break the assumptions of a fully offline setting.
Minor:
Typos - Line 108: 'in policy improvement stage' -> 'in the policy improvement stage', Line 108: scare -> scarce.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - Since CQL also uses a Gaussian policy with full support over the action space, do some of the theoretical claims of SVR generalize beyond this specific algorithm?
- What is the intuition and what are the theoretical consequences of applying SNIS over importance sampling (Section 4.5)?
- Even if it is considered a standard benchmark in the offline literature, I generally do not find evaluating on a subset of D4RL very informative in terms of empirical properties. Have the authors considered evaluating on additional/more comprehensive offline benchmarks? (e.g. [1])
[1] Gulcehre, Caglar, et al. "Rl unplugged: A suite of benchmarks for offline reinforcement learning." Advances in Neural Information Processing Systems 33 (2020).
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors quickly dismiss the constraint of having to model the behavior policy by stating that "we empirically find that the Gaussian model can usually induce excellent performance, which only takes two minutes for pre-training." (Lines 337-338) This claim should be contextualized, as a shallow Gaussian policy is most definitely not sufficient beyond the considered toy settings and the relatively simple D4RL benchmark.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the time and effort you have dedicated to providing feedback on our paper and are grateful for the meaningful comments.
**[W1] I would tone down some of the statements, e.g. lines 26-28"existing value regularization methods" -> " some of the most popular existing value...".**
Thanks for your constructive comment. We used "existing value regularization methods" because we tried our best to summarize the existing value regularization algorithms in the related work section, and to the best of our knowledge, few works achieve the original purpose of value regularization - to penalize all OOD Q-values without affecting ID ones.
**[W1, W2] Mention the assumptions/conditions used to derive the theoretical results in the introduction and abstract.**
Thanks a lot for pointing it out. We analyze policy iteration with the SVR operator in the tabular MDP setting, which is common in analyses of offline RL algorithms [1,2]. We will make sure to add this in a later revision.
[1] Seyed Kamyar Seyed Ghasemipour, et al. EMaQ: Expected-Max Q-Learning Operator for Simple Yet Effective Offline and Online RL. ICML, 2021.
[2] Jiafei Lyu, et al. Mildly Conservative Q-Learning for Offline Reinforcement Learning. NeurIPS, 2022.
**[W3] It would be useful to also analyze the properties of the proposed penalization (theoretically and/or empirically) based on the modeling errors for the trained behavior policy proxy.**
Thank you for your valuable suggestion. To empirically investigate SVR under different behavioral model errors, we run SVR using different checkpoints of the behavioral model, obtained at different steps of the behavioral model training process. The model error is thus controlled by the number of steps taken to train the behavioral model. The results are shown in Figure 2 in the one-page PDF (attached to the global response). We observe that the performance of SVR increases with the number of training steps of the behavioral model. Notably, the performance of SVR stabilizes at a high level after only 1e2 steps of behavioral model training, at which point the model has not yet been adequately trained.
Theoretically, in SVR, the error of the behavioral model $\beta_\omega$ only affects the weights of rewarding ID actions (see Eq. 16). Thus, an imperfect model can make the maximization and minimization of ID Q-values not cancel out well, but have little effect on OOD ones (still penalizing all OOD Q-values). We hypothesize that this is the reason why SVR can achieve good performance with an imperfect behavioral model.
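As a generic illustration of what a Gaussian density estimator over actions provides (log-density values that feed into the weighting described above), the unconditional diagonal case can be written in a few lines. This is a hedged sketch with hypothetical names, not the paper's state-conditioned behavioral model, which is trained iteratively:

```python
import numpy as np

def fit_gaussian(actions):
    """Closed-form MLE for an unconditional diagonal Gaussian over actions."""
    mu = actions.mean(axis=0)
    sigma = actions.std(axis=0) + 1e-6  # small floor avoids a degenerate variance
    return mu, sigma

def log_density(a, mu, sigma):
    """Log-density of actions under the fitted diagonal Gaussian."""
    z = (a - mu) / sigma
    return -0.5 * np.sum(z ** 2 + np.log(2.0 * np.pi * sigma ** 2), axis=-1)
```

The closed-form fit illustrates why such a shallow Gaussian model is cheap to pre-train; flexibility, not training cost, is its limitation on complex distributions.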
**[W4, Q1] Since CQL also uses a Gaussian policy with full support over the action space, do some of the theoretical claims of SVR generalize beyond this specific algorithm?**
In our analyses of CQL, the policy $\pi$ can be any distribution, including a Gaussian covering the full action space. According to Proposition 3, $\mathcal{T}^\pi_{CQL}$ is a contraction operator only if $supp(\pi) \subseteq supp(\beta)$. This is consistent with the analysis in CQL's paper, which also makes this strong assumption. On the one hand, a Gaussian policy covering the full action space does not satisfy this assumption of CQL. On the other hand, even for $\pi$ satisfying the assumption, the fixed point may underestimate or overestimate Q-values in a complicated way (Proposition 3). Therefore, the theoretical claims of SVR cannot be extended to CQL. In addition, from a practical perspective, the Gaussian policy in CQL is usually very narrow and hardly covers the action space.
**[W4] It appears to me that the actual implementation of their algorithm is very similar to existing off-policy methods (e.g., even to CQL).**
Yes, our implementation is similar to CQL but with two main differences: much wider penalization for all Q-values and weighted maximization for ID Q-values.
**[W5] In the toy example, I believe that simply showing one works and the other fails with no context (swept hyperparameters/different collected datasets) is not very informative.**
Sorry that we missed some experimental details. In the toy example, we were verifying the support-constrained optimality of the SVR operator in the tabular MDP, and the actual updates of CQL and SVR follow Eq. 4 and Eq. 7 respectively, which are quite different. We ran experiments over 5 random seeds that affect dataset collection and policy/Q initialization, and the learning curves are shown in Figure 2. We also swept the hyperparameter, but the results were similar (SVR converged, CQL did not). We will add the details in a later revision.
**[W6, Q2] While the authors show some extensions of SVR replacing the Gaussian density estimator with a conditional VAE and using self-normalized importance sampling, results appear inconclusive and there is very little analysis done to motivate these findings. What is the intuition and what are the theoretical consequences of applying SNIS over IS?**
SVR-VAE: We find that the advantages of the VAE are not well demonstrated on the common D4RL datasets. Thus, we conduct additional experiments on bimodal datasets, constructed by mixing the hopper-expert dataset with another dataset collected by a narrow and highly suboptimal Gaussian policy $N(0,0.04)$. In this case, a Gaussian cannot model the behavior policy well. The results are shown in Figure 3 in the one-page PDF. Benefiting from the flexibility of the VAE estimator, SVR-VAE obtains better results.
SVR-SNIS: Theoretically, SNIS is biased, but the bias is small, and the improvement in variance makes it a preferred alternative to IS sometimes [3]. The results in Section 4.5 show that SVR-SNIS performs comparably to SVR in most tasks but worse than SVR on hopper-med, probably due to the bias.
[3] Art B. Owen. Monte Carlo theory, methods and examples. 2013.
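For illustration only (not code from the paper), the difference between the two estimators can be seen on a toy Gaussian target/proposal pair; ordinary importance sampling (IS) averages the weighted values directly, while self-normalized IS (SNIS) divides by the sum of weights, trading a small bias for lower variance. All names below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_pdf(x, mu, sigma):
    """Log-density of a 1D Gaussian N(mu, sigma^2)."""
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))

# Target p = N(1, 1), proposal q = N(0, 2); estimate E_p[x] = 1 from q-samples.
x = rng.normal(0.0, 2.0, size=100_000)
w = np.exp(log_pdf(x, 1.0, 1.0) - log_pdf(x, 0.0, 2.0))  # importance weights p/q

is_est = np.mean(w * x)               # ordinary IS: unbiased, higher variance
snis_est = np.sum(w * x) / np.sum(w)  # SNIS: slightly biased, lower variance
```

With properly normalized densities both estimates converge to the target mean; SNIS becomes preferable when the weights are noisy or the densities are only known up to a constant.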
**Due to the page limit, please refer to the global response block on the top of this page for the remaining responses. Thanks!**
---
Rebuttal Comment 1.1:
Title: Thank you for your rebuttal
Comment: I thank the authors for acknowledging and responding to my main concerns, and I hope they will include the promised changes in future revisions. After also reading the other reviews, my overall assessment of the paper is still positive, although I do not have a very strong opinion given the incremental nature of the work and the modifications required from the submitted version.
---
Reply to Comment 1.1.1:
Title: Thank you for your feedback
Comment: Thank you for your feedback! We really appreciate your suggested modifications to make the statements more rigorous. However, we still want to make a few clarifications.
We agree with you that "The paper correctly recognizes an important issue of existing popular algorithms in the offline literature". The main contribution of SVR is that it penalizes *all* OOD Q-values *without* affecting ID ones, which, to our knowledge, is not realized by other value regularization methods (conservative methods) even though they aim to do so. Benefiting from this property, SVR guarantees support-constrained optimal convergence in the tabular MDP and outperforms prior methods by a large margin. Thus we believe that, both in theory and in practice, SVR is not an incremental work and may indicate a direction for subsequent value regularization works in offline RL.
Following your suggestions, we believe that we have made a great effort to provide all the experiments that we can (behavioral model error, bimodal datasets, RL Unplugged benchmark), and the results further demonstrate the superiority of SVR. We would sincerely appreciate it if you could re-evaluate the contribution of this work. We will make the contributions of this work clearer in the revision. Thank you very much for your time and efforts. | Rebuttal 1:
Rebuttal: ### **Global Response**
We thank all the reviewers for the insightful comments and suggestions. We are greatly encouraged by the positive comments of reviewers, e.g.,
* The paper correctly recognizes an important issue of existing popular algorithms in the offline literature. (CX3L)
* The presentation is clear and the proposed method is easy to understand. (DikP)
* The proofs of theorems and propositions are relatively sufficient. (c3nj)
* The superiority of SVR method is demonstrated from experiment studies. (2cRj)
Meanwhile, we have made every effort to address all the reviewers' concerns and responded to the individual reviews below. We have also uploaded a **one-page PDF** (attached to this response) that contains the additional experiment results. Summary of the PDF:
* Comparisons with additional baselines on the D4RL benchmark in Table 1.
* Experimental results on a subset of the RL unplugged benchmark in Table 2.
* Ablation results on the penalty coefficient $\alpha$ of SVR in Figure 1.
* Experimental results of SVR under different behavioral model errors in Figure 2.
* Comparisons between SVR-VAE and SVR-Gaussian on the bimodal datasets in Figure 3.
We hope our response could address the reviewers' concerns. We would be more than happy to resolve any remaining questions in the time we have, and are looking forward to engaging in a discussion.
---
### **Additional Response to Reviewer CX3L (Part 2 of 2)**
**[W7] There does not appear to be any clear principle for choosing this important hyper-parameter (the sampling distribution $u$) for a given problem - apart from looking at online performance.**
We conducted a parameter study on the sampling distribution $u$ in Section 4.4. As shown in Figure 4, SVR is insensitive to $u$ over a wide range. We also propose an intuitive approach for choosing $u$: a Gaussian with the same mean as the current policy, to emphasize the areas where overestimation is most likely to occur, and with a moderate variance to cover the entire action space.
**[Q3] Have the authors considered evaluating on additional/more comprehensive offline benchmarks? (e.g. Rl unplugged)**
Thank you for your suggestion. The experimental results on a subset of the RL unplugged benchmark are shown in Table 2 in the one-page PDF. We observe that SVR performs better than baseline methods on three tasks and slightly worse on one task.
**[L1] (Lines 337-338) This claim should be contextualized, as a shallow Gaussian policy is most definitely not sufficient beyond the considered toy settings and the relatively simple D4RL benchmark.**
We apologize for the unclear statement. While the Gaussian behavioral model performs well empirically in D4RL, it can be intuitively problematic when dealing with datasets with complex distributions. As mentioned in [W6, Q2], we conduct additional experiments on the bimodal datasets, where SVR-VAE outperforms SVR due to the flexibility of the VAE estimator.
Pdf: /pdf/4742436f09240d7a61c3e460d5826462a1080fc9.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Template-free Articulated Neural Point Clouds for Reposable View Synthesis | Accept (poster) | Summary: The authors present a method for dynamic radiance fields of subjects whose motion can be described by skeletal animation. The method automatically extracts a skeleton using the medial axis transform. Further, it obtains an object feature point cloud from a pre-trained NeRF and expresses the points' positions as a function of the skeleton via linear blend skinning.
The animatable point cloud (including skinning weights and animation) is optimized with the PointNeRF renderer using losses for RGB, rigidity, animation smoothness, and 2D CD against the mask. After optimization, it can be manually reposed and rendered from novel views.
The method is evaluated on the Robots and Blender datasets where it compares favorably against previous work.
Strengths: - Using a point based representation to solve the given task is a good idea and makes sense, given that deformations might be very local and not restricted to a smaller set of rigid parts.
- The ability for automatic part decomposition and reposing are interesting contributions that adds value with respect to other dynamic NeRFs that might produce better visual results.
- The method compares favorably against the most relevant previous work WIM.
- The extracted part decompositions and skinning weights look very good and intuitive
- The paper is well written and very clear.
- The authors clearly discuss limitations, show failure cases and provide ablations
Weaknesses: - The whole work is purely constructive (not many novel conceptual insights) and most parts of the presented method are existing concepts: The method combines NeRF (as TiNeuVox), PointNeRF (adding rotation equivariance), MAT, and ARAP. However, there is novelty in the joint system. I think the positive aspects of this work outweigh this.
- The method in general is pretty close to some of the methods for animatable human radiance fields [80-93]. It differs mostly in that it does not rely on a human template but extracts one from scratch and thus, also works on other subjects. The paper would profit from a comparison with one of them on a human subject though.
- In general, while relevant related work is mentioned, it is not always compared against. Some relevant examples are NeRFies [6] and NSFF [4], which both are known for producing better results than D-NeRF, which is used in the comparisons.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: It does not become clear to me what exactly is done with the features $\mathbf{f}_i$ coming from TiNeuVox. Are they just used as point feature initialization and then further optimized using PointNeRF or are they kept fixed? I can imagine that the PointNeRF renderer could adapt to these features without modifying them. This boils down to the question of which parameters of the model are optimized when, which seems to be not completely clear in the paper.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations and potential negative societal impact are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your feedback! We answer your specific questions in this document. Please refer to the shared response for the experiment results and for answers to common questions.
**Human Subject Comparison**
We added additional results for our method applied to camera captured human full-body sequences from the ZJU-Mocap dataset [0]. Please refer to the shared response for details and results.
**Further Comparisons**
Thank you for the suggestions, we will extend our tables with additional results of related papers that were previously benchmarked on the Blender dataset, such as Nerfies.
**TiNeuVox Features**
The TiNeuVox features are used as an initialization and then further optimized during the learning process. We added additional ablations in the rebuttal PDF showing that, indeed, fine-tuning of TiNeuVox feature points is only necessary if they are randomly initialized. We optimize the color regressor ($\Phi_c$), density regressor ($\Phi_d$), pose regressor ($\Phi_r$), as well as the point features ($\mathbf{f}_i$), the point raw blend skinning vectors ($\mathbf{\hat{w}}_i$), and global scaling parameter $\alpha$. We will clarify this in the manuscript.
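As a generic illustration of the linear blend skinning warp that such skinning weights feed into (a hedged sketch with hypothetical names, not our actual implementation), each canonical point is moved by a weighted sum of per-bone rigid transforms:

```python
import numpy as np

def lbs(points, weights, rotations, translations):
    """Linear blend skinning: warp canonical points by a weighted
    sum of per-bone rigid transforms.

    points:       (N, 3) canonical point positions
    weights:      (N, B) blend weights per point (rows sum to 1)
    rotations:    (B, 3, 3) per-bone rotation matrices
    translations: (B, 3) per-bone translations
    """
    # Positions of every point under every bone's transform: (B, N, 3)
    per_bone = np.einsum('bij,nj->bni', rotations, points) + translations[:, None, :]
    # Blend the per-bone positions with the skinning weights: (N, 3)
    return np.einsum('nb,bni->ni', weights, per_bone)
```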
**References**
[0] Peng, Sida, et al. "Neural body: Implicit neural representations with structured latent codes for novel view synthesis of dynamic humans." Proceedings of the IEEE/CVF CVPR 2021.
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: Thank you for the answers to my question. I appreciate the additional experiments regarding initialization. It is interesting to see that fine tuning of features is not necessary at all, if they are initialized from TiNeuVox. I would have expected that the impact on quality is larger (in a negative sense). It seems to show though, that the feature spaces that PointNeRF and TiNeuVox come up with are quite compatible.
I am not fully satisfied with the responses regarding additional comparisons. I welcome the qualitative results on ZJU-mocap but a quantitative comparison against previous human focused works would still improve the paper, even if the presented method performs worse. In the end, it is to be expected that the presented method will be slightly worse since it solves a harder task (without any human templates involved). It would be interesting though to see how large the gap is. Looking at the qualitative results it seems that the presented method has trouble to fine tune to the real data, leading to smoothing and some artifacts. I agree that it probably is due to issues in the data, e.g. incorrect camera poses. However, this makes a comparison even more important to assess the method.
Regarding Nerfies and NSFF: can you confirm that the same conclusions still hold when you add them to the tables?
All in all, my opinion has not changed in either direction after rebuttal and I tend to keep my original score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your continuing support. We agree that the initialization experiment is interesting and that it shows that the features and their decoding are robust to the choice of their representation.
We plan to expand our performance comparison with additional time-variant methods. To this goal, we were not able to find author’s benchmarks of NFSS and Nerfies applied to object-centric datasets that are the aim of our method. We could attempt to test these for the camera-ready revision if requested but we do not anticipate them to deviate far from performance of D-Nerf from the same era. Moreover, even a potentially higher reconstruction accuracy would not change our conclusions. This is because NFSS and Nerfies rely on unstructured backward deformation field or hyper-parametric canonical space, both of which are incompatible with the reposing goal. Hence, while the lack of kinematic constraints may allow for a better overfit to data, a good performance of these methods does not challenge our core contribution but instead hints at potential backbone alternatives.
Instead of NFSS and Nerfies, we add additional comparisons to more recent methods Hex-Plane [2] and and K-Plane [3]. Table 1 shows that the reconstruction quality of these methods in the Blender dataset is in a similar range as TiNeuVox [1] shown in our submission. Therefore, the conclusions made in our manuscript remain valid.
**Table 1: Average Performance on *Blender* with More Baselines**
| | PSNR $\uparrow$ | SSIM $\uparrow$ | LPIPS $\downarrow$ | Reposeable |
|----------------------|-----------------|-----------------|--------------------|-------------|
| D-NeRF [0] | 30.50 | 0.95 | 0.07 | $\times$ |
| TiNeuVox-B [1] | 32.67 | 0.97 | 0.04 | $\times$ |
| HexPlane [2] | 31.04 | 0.97 | 0.04 | $\times$ |
| K-Planes hybrid [3] | 31.61 | 0.97 | - | $\times$ |
| WIM [4] | 23.81 | 0.91 | 0.10 | $\checkmark$|
| Ours | 29.10 | 0.94 | 0.06 | $\checkmark$|
**References**
[0] Pumarola, Albert, et al. "D-nerf: Neural radiance fields for dynamic scenes." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.
[1] Fang, Jiemin, et al. "Fast dynamic radiance fields with time-aware neural voxels." SIGGRAPH Asia 2022 Conference Papers. 2022.
[2] Cao, Ang, and Justin Johnson. "Hexplane: A fast representation for dynamic scenes." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[3] Fridovich-Keil, Sara, et al. "K-planes: Explicit radiance fields in space, time, and appearance." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[4] Noguchi, Atsuhiro, et al. "Watch it move: Unsupervised discovery of 3D joints for re-posing of articulated objects." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
[5] Shao, Ruizhi, et al. "Tensor4d: Efficient neural 4d decomposition for high-fidelity dynamic reconstruction and rendering." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. | Summary: The authors present a method to learn an articulated model from multi-view video and demonstrate efficient learning of skeleton poses alongside a view synthesis model for dynamic structures. Moreover, the suggested method drastically improves convergence compared to naive approaches. To extract the skeleton, the authors apply the Medial Axis Transform for initialization and RBF weights for the joints.
The results on the D-NeRF dataset demonstrate the superiority of this model over its predecessors.
Strengths: I appreciate the ideas in the method especially skeleton construction and mask loss.
- The approach doesn’t take into account pre-defined joint structure
- Convergence is quite fast
- Can handle non-rigid motions (in theory)
- This a first method learning articulation model purely from the input images
The description of the method is clear and solid.
Weaknesses: The main idea is interesting, but the comparison is poor.
The idea of skeleton initialization is mostly based on previous works, and it is not clear how much the final quality degrades; it would be necessary to compare against an ideal human skeleton, for example.
To apply this technique, one first has to pre-train a dynamic version of NeRF, which complicates the method compared to others.
All experiments are done with a single backbone and an extremely limited variety of data.
As the authors mention, the method assumes a single object in the centre of the scene.
The simplified model produces unrealistic results (Figure 6); the skeleton cannot even cover the actions from training.
There are some works (e.g., TAVA) that incorporate the idea of template-free methods (though, to be fair, they utilize keypoints) that should be mentioned.
Human-related baselines could be interesting here.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Why do you need time-dependent rotations?
Does your training time include pre-training of the dynamic scene in Table 2?
Do you need the ARAP and transformation losses? How do they influence training?
Have you considered using something other than the TiNeuVox method? There are plenty of much faster and more accurate alternatives (e.g., HexPlane, Tensor4D).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Highly reliant on the pre-trained method.
Speed and memory consumption for higher resolutions.
The simplification and skeleton construction itself are based on heuristics and can be very poor on some scenes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your feedback! We answer your specific questions in this document. Please refer to the shared response for the experiment results and for answers to common questions.
**Pre-trained Dynamic NeRF and Single Backbone**
We have added additional results showing that our method can work with arbitrary backbones, as long as they yield a point cloud. Therefore, an initialization could also be achieved by a static pre-trained NeRF or other means as in [0]. However, in our use case we chose TiNeuVox because it can handle settings where only a single view is available at a given time step, which is the case for the Blender dataset.
**Comparison to TAVA**
Thank you for pointing out TAVA [1]. We will discuss it as an additional example of an articulated reconstruction method. While TAVA utilizes keypoints instead of a full template, unlike our method, it still needs a tracked skeleton as an additional input. In our experiments we focus on comparisons to Watch-It-Move as the closest baseline because, like us, it does not need any prior skeletal information.
**Need for Time-Dependent Rotations**
Our method applies an Analysis-by-Synthesis approach. Therefore, we must learn the time-dependent rotations to warp the canonical representation to a time-specific representation to reconstruct the training images. Unlike TAVA we do not have access to tracked skeletons.
**Does your Training Time Include Pre-Training of the Dynamic Scene?**
Yes, all reported times do include the pre-training which is visualized by the horizontal offset of starting points for the blue “Ours” curves in Figure 4.
**ARAP and Transformation Loss**
ARAP enforces per-part rigidity by avoiding excessive weight mixing (please refer to Figure 8a in the main paper). The transformation loss enforces sparsity of joint rotations, which helps with pruning in the post-processing step (please refer to Section 5 in the supplements).
**Alternative backbones**
Yes, please refer to the additional results where we show that random point initialization suffices for our method, therefore, Hexplane and Tensor4D could be utilized, too.
**References**
[0]: Xu, Qiangeng, et al. "Point-nerf: Point-based neural radiance fields." Proceedings of the IEEE/CVF CVPR 2022.
[1] Li, Ruilong, et al. "Tava: Template-free animatable volumetric actors." ECCV 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response! Since my score is lower than others, I will clarify my position after rebuttal.
I appreciate the comparison on the ZJU-Mocap and ask to add it into the main text as well as different backbones to improve the quality of the work for the readers.
I am still interested in the initialization with an ideal skeleton for humans and in comparing the learned LBS with TAVA. Moreover, many conclusions are drawn from the Blender dataset and may not hold for real data (as demonstrated by the ZJU-Mocap example).
I wouldn't be against the paper, since the idea is interesting and the experiment design satisfies me, although, I would recommend authors to make their conclusions more fair by adding human data into the main text with comparison over relevant baselines.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback. We will add an experiment to our manuscript, where we explore our method initialized with an ideal human skeleton similar to the one used in TAVA. Furthermore, we will incorporate the ZJU-Mocap experiments into the camera-ready manuscript. We hope that will allow you to adjust the recommendation in the official review form. | Summary: The method reconstructs a reposable Dynamic NeRF of an articulated object from multiview videos. This is achieved by using linear blend skinning (LBS) of an automatically extracted skeleton to represent the deformation from canonical to observation space.
Strengths: Combining LBS kinematics with NeRF appearance model effectively produces compelling reposing results.
Yielding a posable skeleton enables numerous traditional animation workflows.
Weaknesses: A quick literature search yielded https://arxiv.org/abs/2208.14851 which talks about reposing with a different way of avoiding inverse mapping issues. It may warrant a mention in related work. Could that approach be combined with this paper's automatic skeleton extraction (they use a known rigged template mesh) to yield similar results?
Given that the method yields a posable skeleton, the results could have been much more compelling with artist-made or even mocap-driven animations (rather than just blending between a couple of poses).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Thank you for the supplemental video. I think the Ground Truth T-Rex needs to also be animated.
Line 95 mentions $\Phi_d$ as a backward-warping MLP; did you mean $\Phi_b$?
Line 98 uses $p'$ in an expression for $v_m$; was that supposed to be $p^c$?
Specific descriptions of the various MLPs listed in line 101 would be helpful.
In line 103, did you mean to use $\delta_i$ instead of just $\delta$?
Is the $N$ in Equation 8 ($\mathcal{L}_\textrm{arap}$) the same $N$ as in $\mathcal{L}_\textrm{smooth}$?
How come the simplified skeletons for the man and dinosaur are so sparse, compared to the two robots?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors address the main limitations of their approach, namely that LBS restricts the method to individual rigidly-linked bodies (as opposed to deformable objects or full scenes).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your feedback! We answer your specific questions in this document. Please refer to the shared response for the experiment results and for answers to common questions.
**Relation to Dual-Space NeRF**
Thank you for the suggestion, we will discuss Dual-Space NeRF [0] as an additional example of a template-based method for human bodies as it relies on the SMPL model for reposing. We note that our method differently targets general template-free reposing.
**Reposable Skeleton Animation**
We agree that making our pose representation more artist-friendly for example by directly applying motion captured data is an interesting open problem. We believe that future work in this direction will benefit from our contribution and build additional interfaces on top of it to enable intuitive manipulation of general articulated objects.
**Reposed T-Rex**
We have added an example showing a user-driven opening of the T-Rex’s mouth. Please refer to the PDF attachment Figure 2.
**Specific descriptions of various MLPs**
Thank you for the suggestion, we will add a table specifying the different MLPs in the appendix. We will also make our code publicly available upon acceptance.
**Sparsity of Skeletons**
Our post-processing step prunes skeleton bones that are not needed to represent the motion observed in the input video sequence. Therefore, for sequences with less motion, the skeleton may be reduced to only a few bones. This is the case for the T-Rex scene, for example, which only tilts the full body upward and opens its mouth. We note that this is not a limitation but an intentional design feature that reduces the number of degrees of freedom for a potential animator, and that this step can be omitted if desired.
**Other comments**
We will correct the issues in lines 95, 98, and 103. The N in Eq. 8 and in $\mathcal{L}_\textrm{smooth}$ are indeed identical even though different neighborhood sizes could be considered if required.
**References**
[0] Zhi, Yihao, et al. "Dual-space nerf: Learning animatable avatars and scene lighting in separate spaces." 2022 International Conference on 3D Vision (3DV). IEEE, 2022.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: My rating remains.
A comment regarding "Reposable Skeleton Animation":
You already have the capability of specifying new skeleton poses and rendering a 3D model from them. (quoting from line 231: "smoothly interpolating between user-defined poses").
I was only suggesting to use an animation artist to specify more interesting poses to blend between.
I don't think that's an open problem.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback. We will add a more complex manually defined animation of a walking “Spot” robot to the manuscript to better showcase abilities of our method. | Summary: The paper presents an approach to articulated view synthesis, introducing the concept of Template-free Articulated Neural Point Clouds. The authors utilize a structure-free point-based NeRF representation which supports forward-warping of canonical objects to any poses through Linear Blend Skinning (LBS). Such a representation allows for the joint optimization of LBS pose parameters as well as dynamic NeRFs in a short training time. The method is evaluated on two datasets and compared with existing methods, such as D-NeRF, TiNeuVox, and WIM, showing superior or comparable performance on dynamic novel view synthesis and novel pose synthesis.
Strengths: (+) The paper provides thorough explanations of the model and experiments.
(+) The paper proposed an effective point-based NeRF representation to support the forward-warping of the canonical space for template-free objects.
(+) The method achieves better novel view synthesis and reposing results in a shorter training time on two datasets compared to existing approaches. The visualized learned LBS weights fields and poses are cleaner than WIM.
Weaknesses: (-) Inconsistent Motivation: The paper's motivation appears to be inconsistent throughout the text. Initially, the primary motivations are identified as 1) **Reposability** and 2) **Efficiency**. However, in the model section, the method is built upon a pre-trained TiNeuVox as initialization, aiming to animate TiNeuVox to new poses. Consequently, the efficiency of the proposed method heavily depends on the backbone/representation used, and the main motivation here seems to be **Reposability**. In the experimental section, the method primarily compares with WIM to demonstrate its efficiency over WIM on novel view/pose synthesis. A more convincing demonstration of the **Reposability** performance of the proposed method would involve comparing it with WIM in __its original training iterations__. To prove the **Efficiency** of the method, it would be beneficial to investigate the influence of different backbones/representations on the proposed method.
(-) Performance and Efficiency: The performance and efficiency of the proposed method do not seem promising compared to TiNeuVox. Given that the method builds upon a pre-trained TiNeuVox as initialization and aims to animate TiNeuVox to new poses, one would expect the method to perform at least similarly to TiNeuVox on the novel view synthesis task. However, as shown in Table 1 of the supplementary material, the method performs worse than TiNeuVox on 5 out of 7 scenes and requires significantly longer training time.
(-) Dependence on Pre-trained Model: The proposed method relies on a pre-trained model as initialization, whereas WIM does not require such a pre-trained model. For a fair comparison, it would be advisable to also compare with a modified version of WIM that is initialized from a pre-trained model. This comparison would provide a more balanced view of the strengths and weaknesses of the proposed method.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * Please refer to the weaknesses section.
* How is the blend skinning weight vector $w_i$ optimized? And what is the motivation for defining an additional $\alpha$ for scaling the weights?
* $\Phi_d$ should be $\Phi_b$ in line 95 to represent the backward-warping MLP.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: * The authors could discuss how variations in image quality, such as resolution, lighting conditions, and the presence of noise, might impact the performance of their method.
* How well does the method perform when applied to longer sequences or more complex scenes?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your feedback! We answer your specific questions in this document. Please refer to the shared response for the experiment results and for answers to common questions.
**Inconsistent Motivation & Comparison to WIM with its Original Training Iterations**
We added additional experiments showing that our methods can work with different backbones. Please refer to the shared response.
As described in our paper, we modified the original scheduler of the WIM to access the full training data after the initialization phase. This was required because the original scheduler would access the full training sequence for the initial 80K iterations (around 11 hours with our GPU). This would invalidate the training curves in Figure 4.
To demonstrate that this does not degrade WIM’s performance, we completed the full original training schedule with 200K iterations (~28h) for the robot ‘Spot’ using our data split. While it did eventually achieve a higher PSNR of 27.9 dB, it did not get anywhere close to our 32.4 dB. Furthermore, it did this with almost 3x the training time of our own method. The same holds for the second “merge” phase of WIM (a total of 350K iterations, ~2 days), which results in a PSNR of 28.2 dB.
Importantly, the PSNR of 25.9 dB achieved by WIM with its original scheduler after 80K steps (~11h) is comparable to that of our modified scheduler (26.3 dB) after the same number of iterations. This demonstrates that our comparison setup is fair.
Please note that full training of WIM for multiple scenes is not possible within the rebuttal period which further highlights the difference in efficiency. Moreover, we were forced to reduce the batch size from 16 to 8 frames to fit into the 24 GiB VRAM of our RTX 3090 GPU.
**Performance and Efficiency**
Unlike TiNeuVox and other general dynamic NeRFs, our representation follows Linear Blend Skinning, which reduces the dimensionality of the pose space to enable reposing. This inevitably restricts the freedom to fit residual deformations that do not follow the low-dimensional model. We believe this explains the gap between the reposable and non-reposable methods in Table 1 of the paper Supplement.
Furthermore, the canonical volume in TiNeuVox is time-conditioned and, hence, it can lead to temporal changes not described by the deformation field alone [0]. This has been reported to further reduce the fitting error [1] but it goes against our idea of disentanglement between the canonical representation and reposing.
We will make this distinction clear and highlight that our method achieves consistently better performance than the other reposable baseline, WIM. We believe that future research based on our method will bridge the gap by explicitly modeling the additional scene changes as pose-conditioned residuals.
**Dependence on Pre-trained Models**
While we agree that it would be interesting to study a similar initialization for other methods including Watch-It-Move, we note that the design of such an experiment is not obvious. The authors do not propose any way to initialize the representation based on a known density distribution and a trivial modification is not possible because, unlike our method, the Watch-It-Move representation is implicit. Moreover, it features an inherent ambiguity between the per-part ellipsoids $SDF_i(.)$ and the residual SDF MLP $S_\Theta(.)$. We argue that such a study is outside of the scope of the paper as we primarily focus on learning reposability.
To demonstrate that our method is not closely tied to TiNeuVox, we conducted an experiment where we only initialize positions of the feature points but leave all feature values and network parameters random like in [3]. This partially approximates the behavior of WIM which spends the first 10K iterations to recover a coarse static representation from the first few sequence frames. Please refer to the shared response and the attached PDF.
**Blend Skinning Vector and Alpha Scaling**
The blend skinning vectors $w_i$ are optimized jointly end-to-end. The $\alpha$ parameter maps linear point-to-bone distances to a non-linear space, similarly to [4]. Because $\alpha$ is global and shared by all blend skinning weights for all bones, it distributes gradient updates across the LBS model and reduces within-part variation of the effective skinning weights.
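To illustrate the role of a shared $\alpha$, here is a hypothetical sketch of distance-based skinning weights; the softmax form, the `skinning_weights` name, and the representation of each bone by a single point are our own illustration, not the paper's actual parameterization:

```python
import numpy as np

def skinning_weights(point, bone_points, alpha):
    """Hypothetical sketch: map linear point-to-bone distances to
    normalized skinning weights through a shared non-linear scale alpha.
    bone_points holds one representative point per bone."""
    d = np.linalg.norm(bone_points - point, axis=1)  # linear distances
    logits = -alpha * d                              # shared non-linear scaling
    w = np.exp(logits - logits.max())                # numerically stable softmax
    return w / w.sum()

# A point close to bone 0 gets most of its weight from bone 0;
# a larger shared alpha sharpens the assignment toward the nearest bone.
bones = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
w_soft = skinning_weights(np.array([0.1, 0.0, 0.0]), bones, alpha=1.0)
w_hard = skinning_weights(np.array([0.1, 0.0, 0.0]), bones, alpha=10.0)
```

Because $\alpha$ multiplies every distance, a single gradient step on it adjusts the sharpness of all effective skinning weights at once.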
**Impact of Dataset Image Quality, Length and Complexity**
We conducted an additional experiment with the ZJU-Mocap dataset [5]. This is a challenging dataset because it contains longer and more realistic motion sequences and also realistic captured appearance, noise and complex lighting. Please refer to the shared response for details and results.
**Additional comments**
We will correct the typo in line 95.
**References**
[0]: Tretschk, Edith, et al. "State of the Art in Dense Monocular Non‐Rigid 3D Reconstruction." Computer Graphics Forum. Vol. 42. No. 2. 2023.
[1]: Liu, Yu-Lun, et al. "Robust dynamic radiance fields." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[3]: Xu, Qiangeng, et al. "Point-nerf: Point-based neural radiance fields." Proceedings of the IEEE/CVF CVPR 2022.
[4]: Yang, Gengshan, et al. "Viser: Video-specific surface embeddings for articulated 3d shape reconstruction." Advances in Neural Information Processing Systems 34 (2021): 19326-19338.
[5] Peng, Sida, et al. "Neural body: Implicit neural representations with structured latent codes for novel view synthesis of dynamic humans." Proceedings of the IEEE/CVF CVPR 2021.
[6] Li, Ruilong, et al. "Tava: Template-free animatable volumetric actors." ECCV 2022.
---
Rebuttal Comment 1.1:
Comment: Dear Authors,
Thank you for your detailed rebuttal and the additional experiments provided. Your response has addressed several primary questions and concerns raised during the initial review.
I do not hold a strong opposition to the acceptance of this paper given the merits of the ideas and the further clarified experimental comparisons provided in the rebuttal. I recommend incorporating the clarifications and improvements from the rebuttal into the revision.
Best,
Reviewer ZYjY
---
Reply to Comment 1.1.1:
Comment: Thank you for your comment and acknowledgment that important concerns have been addressed in the rebuttal. We hope that this will allow you to adjust the recommendation in the official review form. As suggested, we will add the additional experiments and results into the manuscript wherever the page limit allows. | Rebuttal 1:
Rebuttal: We thank all reviewers for their valuable feedback, and we will use it to further improve our manuscript. We are glad that all reviewers found the description of our method clear, and that they appreciate that our method does not rely on any pre-defined skeleton (RMHs), offers better novel view synthesis with training time shorter than comparable methods (ZYjY), yields good and intuitive decomposition of object parts (u7Dh) and produces compelling reposing results (bAZS).
We made our best effort to answer the reviewers' questions, and we conducted as many additional experiments analyzing the behavior of our method as possible within the limited time window of the rebuttal.
In this shared document we answer the most common points and present our additional experiments which are included in the attached PDF.
**Performance in challenging datasets and with human bodies**
We conducted an additional experiment with the ZJU-Mocap human body motion capture dataset [0]. This is a challenging dataset because it contains longer realistic motion sequences captured using cameras and human actors rather than synthetic renderings. Consequently, it also features a realistic appearance, capturing noise and complex lighting. We train our method on 5 sequences and we use 12 camera views for supervision. Each sequence consists of 490-790 frames, and we train for 320k iterations with a mask weight $w_1 = 0.2$.
The results are presented in the attached PDF in Figure 3. We observe that our method can recover the 3D shape as well as a meaningful LBS model and pose animation. However, we do recognize that the image fidelity is perceptually lower than in the case of the synthetic datasets. We attribute this partially to the necessity of modeling a more complex appearance and partially to known inconsistencies in the dataset described by previous work (see the discussion of imperfect camera poses and inconsistent lighting in Supplement F of [2]).
**Impact of the Backbone Choice**
While we used TiNeuVox as the backbone in all experiments in the paper, our method is designed to be agnostic to its design. In principle, any dynamic or even static 3D shape reconstruction method could be used to initialize our approach. While we can leverage pre-trained features and network weights obtained from the backbone, our method works well even if only a coarse feature-free point cloud is available. Therefore, we opted to test this more challenging and general question instead of testing any specific backbone alternative.
To this goal, we follow the same training procedure for the Robots dataset as in the main paper but we only initialize positions of our feature points ($\mathbf{p}_i^c$) while keeping their feature values ($\mathbf{f}_i$) as well as the density ($\Phi_d$) and color regressors ($\Phi_c$) random before the start of our training.
We report quantitative results in the supplement PDF Table 1. We observe that the performance is comparable to the full initialization which suggests that any backbone that defines partitioning of the scene volume is an effective initialization for our method. Additionally, we found that increasing the weight of $\mathcal{L}_\textrm{skel}$ is beneficial for performance under these conditions. We will report these findings in the final version.
**Role of Feature Point Tuning**
In our method, we jointly train multiple components of our model in an end-to-end fashion. We experimentally assessed the contribution of fine-tuning the point feature values $\mathbf{f}_i$ in Table 2 of the supplement and we found that it does not have a major impact on the Blender dataset. However, it is important to note that this only holds because the feature values were initialized from an already trained feature decoding model of TiNeuVox. Fine-tuning features is necessary if they are initialized randomly (when using an alternative backbone), as demonstrated in the experiment above. Therefore, we keep feature fine-tuning as part of our method for generality, and we will clarify its role in the manuscript.
**Progressive Initialization**
While our previous experiments demonstrated that our method works well with a generic baseline, we conducted an additional experiment to assess how the quality and computational budget of the backbone training affect the performance of our downstream method. Specifically, we measure the effect of the number of TiNeuVox-backbone pre-training iterations on the initialization of our method in the Jumping-Jacks scene from the Blender dataset.
We present the results in Figure 1 of the supplement. Surprisingly, after only 100 iterations of the backbone training, our method already achieves a PSNR score of over 30 dB. This shows that even though such early initialization is very coarse, our method still recovers fine details. However, we acknowledge that our method would not be able to recover functional object parts completely missed during the skeleton extraction phase. Therefore, we opted to conservatively train the backbone for 20k iterations in all our results.
**References**
[0]: Peng, Sida, et al. "Neural body: Implicit neural representations with structured latent codes for novel view synthesis of dynamic humans." Proceedings of the IEEE/CVF CVPR 2021.
[1]: Noguchi, Atsuhiro, et al. "Watch it move: Unsupervised discovery of 3D joints for re-posing of articulated objects." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
[2] Li, Ruilong, et al. "Tava: Template-free animatable volumetric actors." ECCV 2022.
Pdf: /pdf/185d8517426519c1325b076b5279041b66f9a5c3.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper tackles the task of dynamic novel view synthesis from multiview videos and aims for the ability of reposing. It tackles the problem with a point-based rendering approach. More importantly, it does not need any pre-defined or class-specific template/skeletons and learns a per-video data-driven skeleton. The effectiveness is verified on two datasets, i.e., Robots and Blender.
Strengths: 1. The paper is well-written and easy to follow.
2. The proposed skeleton learning is quite interesting: using a data-driven point cloud from TiNeuVox; then approximating an initial skeleton via Medial Axis Transform (MAT); finally using another data-driven component to refine the skeleton from MAT.
Weaknesses: ### 1. Robustness to Initialization
a. The proposed approach heavily relies on the NeRF pretraining: 1) geometry-wise, the initial skeleton needs the pre-trained NeRF's density function to sample points, which will be later used to run MAT; 2) appearance-wise, the points' features come from the pre-trained NeRF.
b. A natural question is how robust the proposed approach is to the initialization. One way to demonstrate this could be: 1) train TiNeuVox until convergence; 2) initialize the proposed approach from different checkpoints of TiNeuVox's training; 3) draw a plot of the final performance wrt different initializations.
c. What is related is: can authors clarify what the statement "we uniformly sample the density $\sigma$" (L124) means? Does it mean that you will first compute densities on plenty of points to get a sense of min and max densities? And how will you sample the point cloud with such min/max values?
### 2. Evaluation of Human Dataset
Arguably, one of the most important articulated view synthesis scenarios is human-related rendering. I am wondering whether authors can provide quantitative and qualitative results for the proposed approach on some human-related datasets, e.g., ZJU-MoCap used in [20]. Such results could provide a more complete assessment of the proposed approach.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors provide a discussion on limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your feedback! We answer your specific questions in this document. Please refer to the shared response for the experiment results and answers to common questions.
**Robustness to Initialization**
We conducted the proposed experiment, and we report the results in the shared response. They show that our method is robust to the quality of the initial geometry as long as functional parts are recovered.
**Clarification Density Sampling**
We will clarify that we sample the canonical density function $\Phi_d$ of TiNeuVox on a uniform coordinate grid and discard empty samples through thresholding. The grid resolution is adaptively chosen to retain approximately 10k points.
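The described initialization could be sketched as follows; the function name, the fixed resolution schedule, and the threshold value are our own illustrative assumptions, not details confirmed by the authors:

```python
import numpy as np

def sample_point_cloud(density_fn, bbox_min, bbox_max, target=10_000, thresh=0.5):
    """Sketch of the described procedure: evaluate the canonical density
    on a uniform coordinate grid, discard empty samples by thresholding,
    and refine the grid resolution until roughly `target` points remain."""
    for res in (16, 32, 64, 128, 256):
        axes = [np.linspace(lo, hi, res) for lo, hi in zip(bbox_min, bbox_max)]
        grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
        pts = grid[density_fn(grid) > thresh]  # keep non-empty samples only
        if len(pts) >= target:
            return pts
    return pts

# Toy density: a unit ball of high density inside a [-1, 1]^3 bounding box.
ball = lambda x: (np.linalg.norm(x, axis=1) < 1.0).astype(float)
cloud = sample_point_cloud(ball, [-1.0] * 3, [1.0] * 3, target=10_000)
```

Here the adaptive choice simply means increasing the resolution until the thresholded grid retains approximately the desired number of points.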
**Human-Related Rendering**
We tested our method with the ZJU-Mocap human capture dataset. Please refer to the shared response and the attached PDF for details.
**Reliance on Pre-training**
We included an additional experiment which shows that our method is robust to the choice of the backbone-specific initialization of the features and decoders. Please refer to the shared response "Impact of the Backbone Choice" for details.
---
Rebuttal Comment 1.1:
Title: Rebuttal Reply
Comment: I appreciate the authors' effort in addressing my concerns.
After reading the rebuttal and other reviews, I think the newly-added experiments provide a more complete evaluation of the proposed approach and make this a solid work. Therefore, I maintain my positive attitude toward acceptance. | null | null | null | null | null | null |
Neural (Tangent Kernel) Collapse | Accept (poster) | Summary: Previous work has observed an increased alignment and emergence of an approximate block structure in a trained network's Neural Tangent Kernel (NTK) as well as the Neural Collapse (NC) phenomenon in the last hidden layer. The paper attempts to connect the two by showing that in an extreme case of perfect block structure in NTK, NC follows from the dynamics of squared error minimization on the kernel.
Strengths: The paper is extremely well-written and well organized. I did not check the details of the proofs in the appendix but the main paper is correct as far as I can verify. The relevant literature is well covered in the paper and even more so in the appendix. The invariance derived to establish the connection can be an interesting result on its own and its cooccurrence with NC is demonstrated in realistic experiments.
Weaknesses: While I enjoyed reading the paper, I cannot come to agree with the main premise of the work and so I'm voting for rejection. The results rely on a perfect block structure which is unlikely to be achievable in a realistic setting. Even in a restricted setting where such structure will emerge, it will likely emerge along with NC at the end of training and it is hard to argue that it is a cause or driving force behind NC. I will elaborate on these comments (W1, W2) as well as other weaknesses. Overall, a block NTK would have been interesting as a theoretical model (like unconstrained features) and the derivation of NC from this model is insightful, but the current paper is pushing this model far beyond its domain of applicability by assuming that the NTK of the whole network has this structure.
W1: Perfect block structure in NTK assumes the gradients (of an output) are perfectly orthogonal across data points from different classes. I think a necessary requirement for this would be that the data is already linearly separable at the input or the first hidden layer. Otherwise there is some inevitable interference between the classes in the first few layers where data is not linearly separable, making gradients correlated in those layers and preventing the block structure, no matter how long the network is trained. The paper demonstrates a nearly perfect block structure on the extremely simple task of MNIST and the extremely overparametrized network of ResNet20. The perfect block structure is likely far from realistic in larger tasks and smaller networks.
W2: If the block structure was a property of the NTK at initialization then one could argue for a causal relationship from the block structure to NC. The problem is, both the structure and NC are emergent properties of training dynamics and it is not clear to me why the former would be causing the latter. Previous work arguing that NTK alignment facilitates optimization focuses on alignment already present at initialization or on a partial increase in alignment within an early phase of training. It may be reasonable then to argue that the rest of the training will be faster compared to training with a misaligned NTK. The perfect block structure in this paper is not established as an early phase phenomenon.
W3: I'm surprised by the block structure in the individual hidden layer neurons in Fig 1 (b,d). While the output layer neurons are individually fit to the labels, the hidden layer neurons are updated as a whole through backpropagation. Emergent properties in this layer are then likely to be rotation invariant in general. The fact that each individual neuron is showing a pattern here could be a byproduct of the activation function (ReLU). Does the same pattern emerge with LeakyReLU activations (with default parameters)? If not, the current results are limited to ReLU. (I may be mistaken here and perhaps the property is rotation invariant.)
W4: The main result assumes centralized activations and the argument is that Batch Normalization (BN) to some extent enforces this property. Are the representations in the experiments extracted right after a BN layer or right after a ReLU layer? The former would partially satisfy this assumption but the latter would in fact ensure that this is not the case. The text refers to Fig 3 in the appendix but some of the numbers in that figure are ~10. Even if they were smaller, I'm not sure how they would support this assumption since it seems to me that it is not the absolute magnitude of the mean but its magnitude relative to the overall norm of the features that matters.
W5: Previous work on NTK alignment (Baratin et al, Atanasov et al) generally shows a final alignment in the approximate range of 0.1 and 0.5 in different networks and tasks. The kernel alignments in the submission are consistently higher. I understand that different experiments in these works would simply result in different alignment values. Nevertheless the large gap suggests to me that there is a key factor (in training setup or centralizing the kernels or another aspect of the experiment) that differs between the submission and previous work which should be spelled out to prevent the reader from drawing premature conclusions across these papers.
---------
**Post-rebuttal:** I read the other reviews and responses. The rebuttals answered my comment about causality (W2) and so I raised the score. In particular, the response clarified that NTK alignment as measured in the submission is similar to previous work and that it is established as an early-time phenomenon (while NC is a late-time phenomenon). The intro should mention this before claiming a causal relationship. Also, showing an example of the early increase in alignment instead of the current example in Fig. 1 (e) would help motivate the causal relationship.
Regarding W1: I agree that the theory doesn't assume a perfect block pattern. My issue is what the title of this paper implies. Unless NTK shows a near perfect block pattern like Fig. 1 (a-d) in more general cases, calling its behavior "collapse" is misleading. The revision should either provide results like Fig. 1 (a-d) in more general cases or clarify that this figure is a rare and extreme case.
Other reviewers argued that the new result has little significance beyond what is known from the unconstrained features model. I do not have the expertise to comment on this issue and leave this decision to other reviewers.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: See weaknesses
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: See weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the careful evaluation of our work and for raising their concerns! Below are our responses.
**W1 orthogonality:** We believe that the reviewer may have misunderstood our main assumption on the NTK structure, since we do not assume that "the gradients (of an output) are perfectly orthogonal across data points from different classes". Our Assumption 3.2 states that $\Theta_{k,k}(x,x')=\langle\nabla f_k(x),\nabla f_k(x')\rangle=\gamma_n\in\mathbb R_+$ for any two elements of different classes $x$ and $x'$. The value of $\gamma_n$ is allowed to be non-zero in our assumption, and it is of course non-zero in practice for the reasons that the reviewer mentioned. We would also like to point out that the reviewer's claim that we assume "the NTK of the whole network has [block] structure" is not correct. We only assume the block structure of the NTK in the last two layers. Therefore, the kernels in the earlier layers are allowed to take arbitrary form.
**W1 linear separability:** Neither NTK alignment nor NC implies that the data is linearly separable in earlier layers. Since we only make assumptions on the kernel in the two outer-most layers, there is a lot of flexibility in the inner layers for DNNs to lift the (possibly non-linearly-separable) data into another high-dimensional space, where we can separate them.
**W1 adequacy of the block structure assumption:** While the illustrative Fig.1 only shows the NTK block structure for ResNet20 trained on MNIST, our experiments report the kernel alignment values for three architectures (ResNet, VGG and DenseNet) and three datasets (MNIST, CIFAR-10 and FashionMNIST). The alignment is high in most of our experiments, so the kernels display an approximate block structure in each setting. This shows that our assumption is realistic for the considered dataset-architecture pairs. We note that our experiments intentionally cover DNNs that display NC in previous literature. While strong NTK alignment may not occur in other DNNs (e.g., smaller or less fit to the task), there is no evidence that NC would occur in such DNNs either.
**W2 causality:** We fully agree with the reviewer that we do not show a causal relationship between NTK alignment and NC but explore the connection between the two empirical phenomena. We will make this more clear in the introduction of the revision. Specifically, we will avoid the misleading statement that NTK alignment "leads" to NC. However, there is empirical evidence that high levels of NTK alignment are achieved before the loss decreases to near-zero values (see Figure 3 in [1]), and that the kernel changes most rapidly in the early stages of training (see [1,2]). These observations justify the analysis of the dynamics with block-structured NTK before the terminal phase of training.
**W3 rotation invariance:** Since the traced kernel displayed in Fig.1b is a matrix inner product $\langle\nabla_wh(x),\nabla_wh(x')\rangle=Tr(\nabla_wh(x)\nabla_wh(x')^\top)$, it is of course rotation invariant w.r.t. $h$ (i.e., it does not change if we multiply $h(x)$ by an orthogonal matrix $A$). Since we assume $\Theta^h(x,x')=\nabla_wh(x)\nabla_wh(x')^\top = \kappa\mathbb I_n$, our assumption is also rotation invariant. We thank the reviewer for this observation and will add a remark about rotation to the revision.
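Concretely, the invariance follows from the cyclic property of the trace: for any orthogonal matrix $A\in\mathbb R^{n\times n}$ and Jacobians $\nabla_w h(x)\in\mathbb R^{n\times P}$,

$$\langle\nabla_w (Ah)(x),\nabla_w (Ah)(x')\rangle = Tr\big(A\,\nabla_w h(x)\,\nabla_w h(x')^\top A^\top\big) = Tr\big(\nabla_w h(x)\,\nabla_w h(x')^\top A^\top A\big) = Tr\big(\nabla_w h(x)\,\nabla_w h(x')^\top\big),$$

since $A^\top A=\mathbb I_n$.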
**W4 batch normalization:** We extract $h(x)$ after ReLU in the experiments and we agree with the reviewer that this setup ensures that the global features mean is not exactly equal to zero in practice. However, our experiments show that the global mean is insignificant (at least an order of magnitude smaller) in comparison with the class means displayed in the same figures, which supports our assumption. We also examine what happens if we discard the zero global mean assumption in the discussion after Theorem 5.1, point (4). In this case, the dynamics does not have to converge to perfect NC.
**W5 alignment values:** Atanasov et al. [1] report the alignment values on MNIST for 2-layer MLPs, which are much less powerful architectures than the ones we used in our paper. Previous works show that the alignment increases with depth (see Figure 5 in [3], where the alignment on MNIST reaches 0.75 for a 5-layer MLP), and this increase is associated with better performance. Thus, it is expected that our models display better alignment on MNIST. On the other hand, we report alignment around 0.5 for ResNet trained on CIFAR-10 (see Figure 8 in Appendix C), which is more consistent with Atanasov et al. [1]. We believe that the numbers in Baratin et al. [4] are lower because they measure the alignment of matrices of size $NC\times NC$ created by concatenating the features $\nabla f_k(X)\in\mathbb R^{N\times P}$ and labels $\mathbf Y_k\in\mathbb R^N$ over $k=1,\dots,C$, while all the other works (including ours) measure the alignment of the traced kernel $\sum_k\Theta_{k,k}(X)\in\mathbb R^{N\times N}$. We note that we used standard architectures and training procedures, which are described in detail in Section 6. We also provide the code to check and reproduce our numerical results.
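For reference, the (uncentered) traced-kernel alignment measure used in this line of work can be sketched as follows; the toy kernels below are our own construction for illustration, not the submission's experimental data:

```python
import numpy as np

def kernel_alignment(K, Y):
    """Kernel-target alignment A(K, YY^T) = <K, YY^T>_F / (||K||_F ||YY^T||_F),
    where K is an N x N kernel matrix and Y an N x C one-hot label matrix."""
    T = Y @ Y.T                                           # target kernel
    return np.sum(K * T) / (np.linalg.norm(K) * np.linalg.norm(T))

# A perfectly block-structured kernel aligns fully with the labels;
# a perturbation of the block structure lowers the alignment.
Y = np.repeat(np.eye(2), 3, axis=0)                       # 6 points, 2 classes
K_block = Y @ Y.T                                         # ideal block kernel
K_noise = K_block + 0.3 * np.eye(6)                       # diagonal perturbation
```

Whether the kernels are centered before this measure is applied is exactly the kind of setup detail that differs between papers and affects the reported values.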
**References**
[1] Atanasov et al. Neural networks as kernel learners: The silent alignment effect. (2021).
[2] Fort et al. Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the neural tangent kernel. (2020).
[3] Shan & Bordelon. A theory of neural tangent kernel alignment and its influence on training. (2021).
[4] Baratin et al. Implicit regularization via neural feature alignment. (2021).
---
Rebuttal Comment 1.1:
Title: Discussion
Comment: Thank you for the detailed rebuttal.
Regarding W1: In Section 2.2, is lowercase **w** the parameters of the whole network or the last two layers? The kernel defined in Eq 2 is then assumed to have a near perfect block structure in the first part of Assumption 3.2. Are the parameters in this kernel w? And is this the whole network parameters or just the last two layers?
---
Reply to Comment 1.1.1:
Comment: In Section 2.2, $\mathbf{w}$ are the parameters of the whole network. Then the NTK $\Theta_{k,k}(x,x')=\langle\nabla_{\textbf{w}} f_k(x), \nabla_{\textbf{w}} f_k(x')\rangle$ in Eq. 2 is the inner product kernel of the gradients w.r.t. all the parameters of the network. We note that this is the standard definition of the NTK in the literature.
Note that one can also write the NTK as a sum $\Theta_{k,k}(x,x')=\sum_{\ell=1}^L \Theta_{k,k}^{\ell}(x,x')$, where the kernels $\Theta_{k,k}^\ell(x,x'):=\langle\nabla_{\textbf{w}^\ell}f_k(x), \nabla_{\textbf{w}^\ell}f_k(x')\rangle$ are the components of the NTK corresponding to the parameters $\textbf{w}^\ell$ of each individual layer $\ell=1,\dots,L$. Then, as we mentioned in the rebuttal, it is clear that our Assumption 3.2 does not imply that each element of this sum has a block structure. We only assume that the whole sum has an approximate block structure (with non-diagonal blocks *not* equal to zero). Therefore, for Assumption 3.2 to hold, it is enough that only some of the summands have an approximate block structure, and all the other summands can have another structure (e.g. approximately diagonal). Therefore, our assumption does not imply that the kernel has a block structure in the earlier layers or that the earlier layers of the network can separate the classes.
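As a toy numerical illustration of this decomposition (our own construction, not the paper's code): the full NTK can inherit a block structure from a single aligned layer even when an earlier layer contributes only a diagonal kernel.

```python
import numpy as np

# Toy per-example gradients for a 4-point dataset (2 points per class).
# "Layer 1": gradients orthogonal across all points -> diagonal kernel.
# "Layer 2": gradients shared within each class -> block kernel.
G1 = np.eye(4)
G2 = np.repeat(np.eye(2), 2, axis=0)

ntk_layer1 = G1 @ G1.T               # Theta^1: diagonal, no class structure
ntk_layer2 = G2 @ G2.T               # Theta^2: perfect block structure
ntk_total = ntk_layer1 + ntk_layer2  # full NTK = sum of per-layer kernels
```

Here `ntk_total` has constant within-class off-diagonal entries and zero between-class entries, i.e., an approximate block structure, even though `ntk_layer1` alone carries no class information.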
As we mentioned in the rebuttal, the approximate block structure of the NTK (as defined in Eq. 2) is also well supported by our numerical experiments. Moreover, earlier works on the NTK alignment study alignment of individual layers and confirm that in practice not all the layers align to the target function equally well [1,2].
**References**
[1] Baratin et al. Implicit regularization via neural feature alignment. (2021).
[2] Lou et al. Feature learning and signal propagation in deep neural networks. ICML (2022). | Summary: The paper proposes a mechanism behind the empirical phenomenon of Neural Collapse in deep neural networks. The paper derives and analyzes the training dynamics of DNNs with MSE loss and block-structured NTK, identifying three distinct convergence rates in the dynamics.
Strengths: The strengths of the paper include its theoretical rigor, the clarity of its presentation, and the large-scale numerical experiments that support the theory. The paper also provides a new perspective on the empirical phenomenon of Neural Collapse and identifies the conditions under which it occurs.
Weaknesses: The weaknesses of the paper include the assumption of balanced datasets and the lack of exploration of the effects of non-balanced datasets on the dynamics of DNNs with block-structured NTK. The paper also does not explore the effects of adding stochasticity to the dynamics considered in the paper.
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors: Please discuss the relationship of your work with the phenomenon studied in [cite1].
[cite1] Liu et al. Trap of feature diversity in the learning of MLPs. arXiv:2112.00980 (2021).
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 3 good
Limitations: The limitations of the paper include the focus on MSE loss and block-structured NTK, which may not be applicable to other loss functions and NTK structures. The paper also does not provide practical solutions to prevent or mitigate Neural Collapse in deep neural networks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive evaluation of our work! Below are our responses to the reviewer's questions and concerns.
**Unbalanced datasets and stochasticity:** Although we do consider only balanced datasets, we believe that our analysis could in principle be generalized to unbalanced data. This would amount to considering a block-structured NTK with blocks of different sizes. Since such a generalization would make the calculations more cumbersome, we propose this as one of the future work directions in our paper. We likewise defer adding stochasticity to the dynamics to future work. As we note in the response to Reviewer LnGq about a possible weakened kernel assumption, we believe that adding centered noise to the kernel in our model should not significantly change the dynamics of the features' class means. We would also like to point out that, although our theory does not include noise, the experiments certainly include some stochasticity.
**Relationship to [cite1]:** We looked into the work by Liu et al. [cite1], which studies a two-phase phenomenon in the training of MLPs, where feature diversity decreases in the first phase and then increases in the second phase. We believe that this phenomenon may be related to the dynamics of NTK alignment during training. Indeed, increasing similarity between feature gradients in the first phase could mean that the NTK values first become more similar across the whole dataset, and only afterwards does the block structure emerge. Thus, a possible future work direction is to study and compare the NTK structure in the first and second phases of training.
---
Rebuttal Comment 1.1:
Title: Thanks for your response
Comment: I appreciate your feedback. I would keep my current score regarding the response. | Summary: The main contribution of the paper "Neural (Tangent Kernel) Collapse" is the connection of the Neural Tangent Kernel (NTK) alignment and Neural Collapse (NC) phenomenon in deep neural networks (DNNs). The authors assume that the empirical NTK develops a block structure aligned with the class labels. They derive the dynamics of DNNs trained with mean squared (MSE) loss and identify three different convergence rates for certain components of the error. They also identify a hyperbolic invariant that captures the essence of the dynamics and use it to prove the emergence of NC in DNNs with block-structured NTK. Also, it provides large-scale numerical experiments to support their theory. Overall, the paper provides valuable insights into the dynamics of DNNs with block-structured NTK and the emergence of NC.
Strengths: - Originality: Yes, this is the first work to connect NTK alignment and NC. Another new contribution is exploring the dynamics of DNNs with block-structured Neural Tangent Kernel (NTK).
- Quality: High quality, the paper presents a thorough analysis of the dynamics of DNNs with block-structured Neural Tangent Kernel (NTK) and the emergence of Neural Collapse (NC) phenomenon. Also, they provide large-scale numerical experiments on three common DNN architectures and three benchmark datasets to support their theory.
- Clarity: Yes, the paper is well-structured and clearly explains technical terms and concepts, making it easy to understand for readers.
- Significance: Yes, for explaining the emergence of NC, most theoretical works adopt the unconstrained feature models. However, this paper provides a new point of view of NTK to explain it. It makes a step towards realistic DNN dynamics by means of the NTK.
Weaknesses: The main weakness of the paper is that its fundamental Assumption 3.2 is not sufficiently justified. It assumes a block structure of the NTK, but this is often not the case in real DNNs that are not well trained.
Some minor remarks:
1. I think the subtitles for a) and b) in Figure 1 should be the **sum** over the classes or feature dimensions.
2. Equation 2: missing part of the parentheses.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. This paper only focuses on DNNs trained with mean squared error (MSE) loss. Since Cross-Entropy (CE) loss is a common choice for training classification networks, do the DNN dynamics and conclusions in this paper also hold for CE loss?
2. Can we get some theoretical insights from this work to improve the design and training of deep neural networks?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors have addressed their limitations in chapter 7. There is no negative societal impact to be expected from this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive evaluation of our work! Below are our responses to the questions.
**Cross-Entropy loss:** Generalizing our theoretical results to CE loss is challenging, since the dynamics equations with CE loss are more complex than in the case of MSE, even with a block-structured NTK. In general, CE loss is difficult to analyze, so theoretical NC papers typically focus on the dynamics of unconstrained features models with MSE loss [1,2]. To the best of our knowledge, NC papers that consider CE only analyze global minimizers of the loss function and do not study dynamics equations [3,4,5]. However, we agree that it is important to verify whether our conclusions also hold for CE loss, and we will provide additional empirical results for CE loss in the appendix of the revision.
**Insights for design and training of DNNs:** While not all DNNs display NC, little is known about the particular factors that determine whether a given DNN converges to NC or not. Our work may shed some light on the importance of weight decay and batch normalization for the emergence of NC, which previous literature also conjectured (see [1,6]). In particular, our assumption on the invariant (which is necessary for the emergence of NC) has effects similar to regularization (see e.g. Appendix A.1 in [7]), while the zero global mean assumption is related to batch normalization.
**References**
[1] Han et al. Neural collapse under mse loss: Proximity to and dynamics on the central path. ICLR (2022).
[2] Mixon et al. Neural collapse with unconstrained features. (2020).
[3] Lu \& Steinerberger. Neural collapse with cross-entropy loss (2020).
[4] Zhu et al. A geometric analysis of neural collapse with unconstrained features. NeurIPS (2021).
[5] Wojtowytsch et al. On the emergence of simplex symmetry in the final and penultimate layers of neural network classifiers. (2020).
[6] Ergen \& Pilanci. Revealing the structure of deep neural networks via convex duality. ICML (2021).
[7] Tirer \& Bruna. Extended unconstrained features model for exploring deep neural collapse. ICML (2022).
---
Rebuttal Comment 1.1:
Comment: The work is worth admiring. Also, the authors defend themselves well.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer again for their valuable feedback and the generous score! | Summary: This work provides a theoretical connection between two related phenomena in deep learning dynamics: the structural change to the empirical NTK during training, specifically its alignment with class labels; and the neural collapse, which refers to a set of behaviors exhibited by NNs trained on multiway classification. The authors asked whether the first phenomenon causes the second, and studied NN dynamics under a toy-model NTK that has a block structure.
Strengths: Both neural collapse and NTK alignment are prominent empirical features of DNN learning (at least in some cases), and the authors provided a careful treatment of a relevant toy model that connects the two. The writing is clear despite the convoluted nature of the subject (speaking as someone who has written on this specific topic before). Assumptions, claims and proven results are easy to find and understand.
Weaknesses: My primary concern is with the amount of insight that this work brings.
(1) The central question posed by the authors, in line 37, is stated as "does NTK alignment lead to neural collapse?". But I do not understand the cause-and-effect relation implied here. It has been long understood that NTK provides a dual perspective to NN training dynamics. So much as the NTK does not "lead to" the reduction of loss in NN training, NTK alignment does not "lead to" neural collapse.
(2) My second point is closely related to the first one. The assumptions about the NTK kernel and feature kernel in Sec. 2.2 are motivated by empirical observations of trained NNs and make ensuing analysis simpler. But I'm concerned that they are sufficiently strong to "guarantee" neural collapse -- these are very strong assumptions and already imply that the NN representation/dynamics are in a stage where things are "collapse-y". In other words, on a conceptual level, this work risks falling into a circular logic -- if we assume the NN is in neural collapse, we can derive that it is in neural collapse. This work does not touch on the topic of how the kind of block-like structure arises, which in my view is what "causes" neural collapse (Conceptually, I think the emergence of block-like kernels and NC are two sides of the same coin).
(3) As the authors pointed out, an important implication of NC is the generalization benefits that it brings. The current analysis, however, appears to entirely focus on the training set. This is a limitation shared by previous work on empirical NTK dynamics, though.
(4) A minor point, but I find Sec. 4.1 to be a little excessive. It is well known that if you do regression with a kernel, the eigenstructure of the kernel determines how quickly different components of the target vector get learned. There is plenty of prior work on the subject. Perhaps it is good to cite this prior work and shorten the section.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: I wonder how much the analysis can be generalized to weakened assumptions about the kernels? For example, instead of assuming that the kernel has a block-like structure, can we assume that it has the form K=K0 + Kb, where Kb is the block-like structure proposed by the authors, and K0 is the eNTK at initialization? I think this is a much more realistic assumption about the dynamics of NN learning.
I also welcome additional comments from the authors about the insights (of course, what counts as "insights" is different for different people!) that this work brings about the emergence of block structures / NC.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Please see weaknesses. There is no negative societal impact. I appreciate the authors discussing the weakness of making the assumptions in 3.2.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the careful evaluation of our work! Below are our responses to the reviewer's questions and concerns.
**Does NTK alignment cause NC?** We fully agree with the reviewer that we do not show a causal relationship between NTK alignment and NC but rather explore the connection between the two empirical phenomena. We will make this clearer in the introduction of the revision. Specifically, we will avoid the misleading statement that NTK alignment "leads" to NC. However, there is empirical evidence that high levels of NTK alignment are achieved before the loss decreases to near-zero values (see Figure 3 in [1]), and that the kernel changes most rapidly in the early stages of training (see [1,2]). These observations justify the analysis of the dynamics with block-structured NTK before the terminal phase of training. Combined with these observations, our results also suggest that we could use NTK alignment (together with the invariant identified in our paper) to "predict" NC before convergence.
**Are NTK alignment and NC two sides of the same coin?** Our results indicate that NC and NTK alignment are related but are not the same thing. Indeed, we show (both theoretically and empirically) that DNNs with NTK alignment do not always converge to NC. Moreover, our assumption on the dynamics invariant provides a necessary condition for the emergence of NC in DNNs with block-structured NTK (see Theorem 5.1 and the following discussion). Thus, we believe that NTK alignment is a more common phenomenon than NC, and this insight is one of the main contributions of our work.
**Does NC follow trivially from the NTK block structure?** While the connection between NTK alignment and NC may seem obvious to some readers on an intuitive level, establishing such a connection theoretically is not entirely trivial and, to the best of our knowledge, has never been done before in the literature.
In the response to Reviewer PDBS, we explain why NC does not follow trivially from the assumption on the NTK block structure. Moreover, as we mentioned above, NC does not even always occur in DNNs with block-structured NTK.
**Broad implications:** Our work proposes to study NC through the lens of NTK alignment, which opens new research directions. Previous works on NC focus on the top-down approach (layer-peeled models) and fundamentally cannot explain how NC develops through earlier layers of a DNN, what are the effects of depth, etc. On the other hand, NTK alignment literature focuses on the alignment of individual layers, and recent theoretical works even quantify the role of each hidden layer in the alignment [3]. Therefore, we believe that the connection between NTK alignment and NC established in our work provides a conceptually new method to study NC. We will include a discussion section in the revision to explore the implications of our work in more detail.
**Weakened kernel assumption:** The NTK at initialization is random (w.r.t. the random initialization of the parameters) and has an approximately diagonal structure (see [4,5]). Therefore, a kernel of the form $\Theta = \Theta_{block} + \Theta_0$ can be modeled as a certain block-structured kernel $\Theta_{block} := a\mathbb{I} + b\mathbf Y^\top\mathbf Y + c\mathbf{1}\mathbf{1}^\top$ plus an i.i.d. noise term. Under this model, the dynamics of the features' class means $\langle h \rangle_c$ can be viewed as an approximation of the expected dynamics for each class. Thus, assuming that the noise is centered and the sample size is large enough, the noise should not change the dynamics of the class means too much. However, quantifying the randomness in such a setting may be theoretically challenging, and we propose it as a future work direction in our paper.
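As a small numerical aside (not part of the original rebuttal), the block-structured kernel $a\mathbb{I} + b\mathbf Y^\top\mathbf Y + c\mathbf{1}\mathbf{1}^\top$ can be constructed directly for a balanced, class-sorted toy dataset; its spectrum then has exactly three distinct eigenvalues, matching the three convergence rates in the paper. The values of $a,b,c$ and the class counts below are arbitrary illustrative choices:

```python
import numpy as np

n, m = 3, 4            # n classes, m samples per class (illustrative values)
N = n * m
a, b, c = 2.0, 1.0, 0.5

# One-hot label matrix Y (n x N) for a balanced, class-sorted dataset.
Y = np.kron(np.eye(n), np.ones((1, m)))

# Block-structured kernel: a*I + b*Y^T Y + c*1 1^T.
Theta = a * np.eye(N) + b * Y.T @ Y + c * np.ones((N, N))

eig = np.linalg.eigvalsh(Theta)
# Three distinct eigenvalues:
#   a               with multiplicity N - n,
#   a + b*m         with multiplicity n - 1,
#   a + b*m + c*N   with multiplicity 1.
assert np.sum(np.isclose(eig, a)) == N - n
assert np.sum(np.isclose(eig, a + b * m)) == n - 1
assert np.sum(np.isclose(eig, a + b * m + c * N)) == 1
```

Adding centered i.i.d. noise to `Theta` perturbs these eigenvalues but, for large enough samples, leaves the class-mean dynamics approximately unchanged, as argued above.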
**References**
[1] Atanasov et al. Neural networks as kernel learners: The silent alignment effect. (2021).
[2] Fort et al. Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the neural tangent kernel. NeurIPS (2020).
[3] Lou et al. Feature learning and signal propagation in deep neural networks. ICML (2022).
[4] Xiao et al. Disentangling direct from indirect relationships in association networks. (2022).
[5] Seleznova et al. Neural tangent kernel beyond the infinite-width limit: Effects of depth and initialization. ICML (2022).
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thanks to the authors for their careful response. The response and other proposed edits would address my concerns significantly. I am therefore raising my score.
---
Reply to Comment 1.1.1:
Comment: We are grateful to the reviewer for taking our arguments into account and increasing the score! We also greatly appreciate the reviewer's feedback, which helped us to convey the results and implications of our work more clearly. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their valuable feedback! Based on the reviews, we were able to identify several improvements for the paper, which we will incorporate in the revision. Below we summarize the reviewers' concerns, our responses, and the proposed changes to our paper.
## Concerns about the NTK block-structure assumption
Reviewers PDBS, LnGq and MvSr expressed several concerns about Assumption 3.2 on the NTK block-structure. We summarize these concerns and our responses below:
- **Orthogonality of gradients between different classes:** Reviewer MvSr states that our assumption implies that "gradients (of an output) are perfectly orthogonal across data points from different classes". However, as we state in the response, we do not assume orthogonality between points from different classes, since $\Theta_{k,k}(x,x')=\gamma_n\geq 0$ in Assumption 3.2. Therefore, we believe that this concern comes from a misunderstanding of our main assumption.
- **Independence between gradients of different feature neurons:** Reviewer PDBS states that the assumption of independence between different feature neurons is too strong, since it already implies strong results related to NC. Our response is two-fold: 1) We show that we can relax our Assumption 3.2 to remove the independence. Such a relaxation does not change the paper's main Theorem 5.1 and requires only small adjustments of the proofs (a change of variables). 2) We show that NC does not follow from Assumption 3.2.
- **NC follows trivially from the assumption:** Reviewer LnGq expressed a related concern that our assumption is "sufficiently strong to "guarantee" NC". However, our results indicate that NTK block structure does not guarantee convergence to NC, and our invariant assumption provides a necessary condition for the emergence of NC in DNNs with block-structured NTK. We also detail why NC does not follow from the assumption in the response to Reviewer PDBS.
- **Adequacy of the assumption:** Reviewer MvSr expressed a concern that block structure of the NTK is not realistic and implies linear separability of data in the earlier layers. In the response, we explain that linear separability in earlier layers does not follow from our assumption. Moreover, our numerical results and previous works on NTK alignment provide a large body of empirical evidence to justify our assumption. We believe that this concern may come from a misunderstanding of our assumption and its implications.
We propose the following changes in the revision of our paper to address the reviewers' comments and concerns:
- Based on the comments by Reviewer PDBS, we will adopt a relaxed version of Assumption 3.2 without independence of different feature neurons. This improvement requires minimal changes in the paper's reasoning and proofs.
- Based on the comments by Reviewer LnGq, we will modify the introduction to make it clear that NTK alignment and NC are not the same phenomenon and do not always occur together.
Overall, we admit that our main assumption is a simplification of the realistic DNNs’ behavior. However, empirical evidence (including our numerical experiments) shows that our assumption can approximate well-trained realistic DNNs. Moreover, much like our assumption, perfect NC is also a simplification that never holds exactly in realistic DNNs.
## Concerns about significance
Reviewers LnGq and MvSr expressed concerns about significance and implications of our work. We summarize these concerns and our responses below:
- **Causality:** Reviewers LnGq and MvSr share a concern about the causality between NTK alignment and NC. We agree with the reviewers that we do not show a causal relationship between NTK alignment and NC but rather explore the connection between the two empirical phenomena. However, we also reference empirical evidence that high levels of NTK alignment are achieved before the loss decreases to near-zero values, which justifies the analysis of the dynamics with block-structured NTK before the terminal phase of training.
- **Insights:** The primary concern of Reviewer LnGq is the amount of insight that our work brings. We respond that our results 1) are novel, 2) are non-trivial, and 3) open new research directions. Therefore, we believe that they are valuable for the community.
We propose the following changes in the revision of our paper to address the reviewers' comments and concerns:
- We will modify the introduction to make it clear that we do not show causality between NTK alignment and NC. Specifically, we will avoid the misleading statement that NTK alignment "leads" to NC.
- We will include a new section to discuss implications and broad impact of our work.
- To address the question of Reviewer Hrgu about generalization of our results to CE loss, we will add numerical experiments with CE loss in the appendix of the revision.
Overall, our paper provides the first (to the best of our knowledge) theoretical connection between NTK alignment and NC. Since both phenomena are 1) of high interest for the ML community and 2) hard to study theoretically, we believe that our results would be interesting for a wide audience.
## Conclusion
We believe that the proposed adjustments cover all the major concerns about the paper's technical contribution raised by the reviewers. These adjustments also do not require major changes of the paper's content and proofs. Moreover, while most of the reviewers agree that our results are novel and interesting, we hope that our response made the significance of our work even more clear. We are optimistic that the reviewers will consider our arguments and be open to a partial reevaluation of their scores. We will also gladly respond to any additional questions that may arise during the discussion period! | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper connects NTK (neural tangent kernel) and NC (neural collapse) by assuming NTK has a block structure in the late training stage, meaning the kernel for samples within the same class is much larger than samples from different classes. The technical assumption additionally assumes that the gradients of classification heads are independent of each other, and the gradients of neurons of the penultimate layer are also independent of each other. The authors then claim that the kernel gradient descent of this NTK leads to NC. Empirical results provide the correlation between the block structure and NC.
Strengths: The paper is the first paper I have seen that builds the connection between NTK and NC. It is inspiring in concept with good novelty.
The presentation of the paper is overall good and easy to read.
Weaknesses: I believe the assumption of the paper is too strong, because the authors not only assume the block structure of NTK but also assume the independence between gradients of neurons/classification heads for any input. Taking a closer look at Assumption 3.2, one can already derive strong results. For example, one can write $\nabla f_k(x)$ as a function of $\nabla h_s(x)$, $\mathbf W$ and $h_s(x)$ by the chain rule, and plug it into $\Theta$. Then it is not hard to find that
- $\langle \mathbf W_k, \mathbf W_s\rangle$ is identical for any $k\neq s$;
- $\langle \mathbf W_k, \mathbf W_k-\mathbf W_s\rangle=\gamma_d/\kappa_d=\gamma_c/\kappa_c=\gamma_n/\kappa_n$;
- $\langle h(x), h(x')\rangle =-\kappa \langle \mathbf W_k, \mathbf W_s\rangle$ (where $\kappa$ is $\kappa_d$, $\kappa_c$ or $\kappa_n$, depending on the relationship between $x$ and $x'$).
Those are some implications I derived within an hour. There might be some flaws but the message is clear --- the independence between gradients is a very very strong assumption.
Another thing I want to point out about the proof is that the training may not be in the kernel regime. The authors assume that after hundreds of training epochs the empirical NTK satisfies the stated property, but it is not clear whether it changes afterward (meaning the training is not in the kernel regime but in the feature learning regime, which does not support linearization of the model).
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Can you verify the calculation I did above?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: I do not see any potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for raising the concern about the assumption of independence between output/feature neurons, i.e., the part of Assumption 3.2 stating that $\Theta_{k,k'}(x,x')=\Theta^h_{k,k'}(x,x')=0$ for any $k\neq k'$ and any $x,x'$. While the calculations in the review do not appear fully correct (we provide details below), we agree that the independence assumption is quite strong. In fact, as we show below, we can remove the independence part of the assumption with minimal changes to the paper.
**Relaxation of the assumption on $\Theta^h$:** Based on the review, we identified that we can relax our main assumption without significant changes to the paper in the following way:
$$\Theta^h_{k,k'}(x,x')=\beta\cdot\Theta^h_{k,k}(x,x')\quad \forall k=1,\dots,n,\forall k'\neq k $$
for some $0\leq\beta<1$. Note that our original assumption is the special case of this assumption with $\beta=0$. Instead of requiring independence between different features, the relaxed assumption only states that the dependence between different features is weak, which is indicated by $\beta$.
This relaxation amounts to a simple change in the training dynamics of DNNs with block-structured NTK. Indeed, in the proof of Theorem 4.1 (Appendix B.1), we obtain the following dynamics for $h_s(x_i)$:
$$ \dot h_s(x_i)=-\sum_{s'=1}^n\sum_{i'=1}^N[\mathbf W\mathbf H+\mathbf b\mathbf 1_N^\top-\mathbf Y]\_{s'i'}\Theta^h_{s',s}(x_i,x_{i'})$$
Vectorizing each term of the outer sum, we get the dynamics of the whole features matrix:
$$\dot{\mathbf H}=-\mathbf A[\mathbf W^\top(\mathbf W\mathbf H+\mathbf b\mathbf 1_N^\top-\mathbf Y)][(\kappa_d-\kappa_c)\mathbb I_N+(\kappa_c-\kappa_d)\mathbf Y^\top\mathbf Y + \kappa_n\mathbf 1_N\mathbf 1_N^\top],$$
where $$\mathbf{A}:=(1-\beta)\mathbb I_n + \beta\mathbf 1_n\mathbf 1_n^\top.$$
Thus, adding the dependence between the feature neurons amounts to "scaling" the dynamics of $\mathbf H$ by an invertible matrix $\mathbf A$. The dynamics of $\mathbf W$ and $\mathbf b$ remain unchanged.
We notice that such scaling does not affect our proof of variability collapse (NC1). Indeed, applying the change of variables $\tilde{\mathbf E}:=\sqrt{\mathbf A}{\mathbf E}\sqrt{\mathbf A}$ and $\tilde{\mathbf H}_2:=\sqrt{\mathbf A^{-1}}\mathbf H_2$, we can carry out the same proof and show that $\tilde{\mathbf H}_2\to\mathbb O$ and thus $\mathbf H_2\to\mathbb O$. Similarly, we can apply the change of variables $\tilde{\mathbf W}:=\mathbf W\sqrt{\mathbf A}$ and $\tilde{\mathbf H}_1:=\sqrt{\mathbf A^{-1}}\mathbf H_1$ in the proofs of NC2-4 to show the duality $\tilde{\mathbf H}_1\propto\tilde{\mathbf W}^\top$. From this duality and the assumptions of Theorem 5.1, we get the ETF structure of the features matrix and the duality $\mathbf H_1\propto\mathbf W^\top$. Thus, the statement of the main Theorem 5.1 remains unchanged under the relaxed assumption.
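As a quick numerical check (our own sketch, not part of the rebuttal), the matrix $\mathbf A=(1-\beta)\mathbb I_n + \beta\mathbf 1_n\mathbf 1_n^\top$ is positive definite for any $0\leq\beta<1$, so the square root used in the change of variables is well defined; $n$ and $\beta$ below are arbitrary illustrative values:

```python
import numpy as np

n, beta = 4, 0.3   # illustrative values; any 0 <= beta < 1 works
A = (1 - beta) * np.eye(n) + beta * np.ones((n, n))

# Eigenvalues of A: 1 - beta (multiplicity n-1) and 1 + (n-1)*beta
# (multiplicity 1), all strictly positive, so A is invertible.
w, V = np.linalg.eigh(A)
assert np.all(w > 0)

# Symmetric square root sqrt(A) used in the change of variables.
sqrtA = V @ np.diag(np.sqrt(w)) @ V.T
assert np.allclose(sqrtA @ sqrtA, A)
```

At $\beta=0$ the matrix reduces to the identity, recovering the original independence assumption.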
We will adopt the relaxed version of the assumption in the revision of our paper.
**Relaxation of the assumption on $\Theta$:** It is also possible to relax the assumption on the NTK $\Theta$ in the same way:
$$\Theta_{k,k'}(x,x')=\beta'\cdot\Theta_{k,k}(x,x')\quad\forall k, \forall k'\neq k$$
for some $0\leq\beta'<1$. This relaxation does not affect the dynamics derived in Theorem 4.1, which is the main target of our analysis. The assumption on $\Theta$, in fact, only affects the convergence analysis in Section 4.1. The argument about the convergence rates still holds with the relaxed assumption, since the eigenvectors of the non-diagonal terms of the NTK $\Theta_{k,k'}$ are the same as those of the diagonal terms $\Theta_{k,k}$.
**Implications of the assumption:** Following the reviewer's reasoning and writing $\nabla f_k(x)$ by chain rule as a function of $\nabla h(x)$, $\mathbf W$ and $h(x)$, we can derive the following two implications of our relaxed assumption:
\begin{align}
\langle\mathbf W_k,\mathbf W_{k'}\rangle&=-\beta\sum_{s\neq s'}\mathbf W_{ks}\mathbf W_{k's'}+\dfrac{\Theta_{k,k'}(x,x')}{\Theta^h_{s,s}(x,x')}\quad\forall k\neq k',\\\\
\langle\mathbf W_k,\mathbf W_{k}\rangle &=-\beta\sum_{s\neq s'}\mathbf W_{ks}\mathbf W_{ks'}+\dfrac{\Theta_{k,k}(x,x')}{\Theta^h_{s,s}(x,x')}+\dfrac{\langle h(x), h(x') \rangle+1}{\Theta^h_{s,s}(x,x')}
\end{align}
As the reviewer correctly noted, this implies $\langle\mathbf W_k,\mathbf W_{k'}\rangle=const$ for any $k\neq k'$ in the special case of $\beta=0$ (but not if $\beta>0$). The two remaining conclusions in the review do not seem to hold even in the case $\beta=0$. In particular, as far as we can see, neither the ETF structure of the class means (NC2) nor the duality between the weights and the features matrix (NC3) follows from these equations. Moreover, even variability collapse (NC1) does not in general follow from the assumption.
A trivial counterexample where $\mathbf H$ satisfies the above conditions but NC1-2 do not hold is given by the following configuration of the feature vectors: $h(x_1^{c_1})=(1,0,0,0)$, $h(x_2^{c_1})=(1/\sqrt{2},1/\sqrt{2},0,0)$, $h(x_1^{c_2}) = (0,0,0,1)$, $h(x_2^{c_2}) = (0,0,1/\sqrt{2},1/\sqrt{2})$.
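As a sanity check, the counterexample can be verified numerically. The sketch below (variable names are ours; all four vectors are written in $\mathbb R^4$) confirms that the features have unit norm, the within-class and between-class inner products are each constant, and yet the within-class variability is strictly positive, so NC1 fails:

```python
import numpy as np

# Features per class: one row per sample.
s = 1.0 / np.sqrt(2.0)
H_c1 = np.array([[1.0, 0.0, 0.0, 0.0],
                 [s,   s,   0.0, 0.0]])   # class c1
H_c2 = np.array([[0.0, 0.0, 0.0, 1.0],
                 [0.0, 0.0, s,   s]])     # class c2

norms = np.concatenate([np.linalg.norm(H_c1, axis=1),
                        np.linalg.norm(H_c2, axis=1)])
within = np.array([H_c1[0] @ H_c1[1], H_c2[0] @ H_c2[1]])  # distinct samples, same class
between = (H_c1 @ H_c2.T).ravel()                          # all cross-class pairs

# NC1 (variability collapse) would force every sample onto its class mean;
# here the total within-class variance is clearly nonzero.
within_var = np.var(H_c1, axis=0).sum() + np.var(H_c2, axis=0).sum()
```

All norms equal 1, the within-class inner products equal $1/\sqrt 2$, the between-class inner products equal 0, and `within_var` is strictly positive.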
**Kernel regime:** There is empirical evidence that high levels of NTK alignment are achieved before the loss decreases to near-zero values (see Figure 3 in [1]), and that the kernel changes most rapidly in the early stages of training (see [1,2]). These observations justify the analysis of the dynamics with block-structured NTK before the terminal phase of training. Nevertheless, we of course do not claim that the NTK of real DNNs remains completely constant even at the end of training. Generalizing the analysis to a non-constant NTK is an interesting but challenging direction for future work.
**References**
[1] Atanasov et al. Neural networks as kernel learners: The silent alignment effect. (2021).
[2] Fort et al. Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the neural tangent kernel. NeurIPS (2020).
---
Rebuttal Comment 1.1:
Title: Comments on the implication and relaxation
Comment: Thanks to the authors for the clarification! My quick calculation indeed had a bug, so my second and third implications are false. I also verify the two implications derived by the authors. However, if we take a closer look at the correct implications, we can find that, without the newly proposed relaxation,
- $\langle W_k,W_{k'}\rangle=0$.
- $\langle W_k, W_k\rangle=\gamma/\kappa+(\langle h(x), h(x')\rangle +1)/\kappa$ where $\gamma,\kappa$ depends on the relation between $x,x'$.
They are very strong assumptions. Even with the relaxation, given the relation between $x$ and $x'$, $\langle h(x), h(x')\rangle$ is still a fixed value.
Overall, I think the assumption (even with relaxation) is strong because of too much symmetry, which is convenient for proving NC.
On the other hand, although I understand the changes in the proof might be straightforward when incorporating the relaxation, it is very hard for me to verify it confidently without reading it with all the changes added.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for taking the time to verify our calculations and for getting back to us! We would like to highlight the following points that we made in the rebuttal regarding our assumption:
- The NTK block structure assumption alone does not imply NC. Moreover, none of the four NC components (denoted NC1-4 in the paper) follows from the assumption alone. We show this 1) in the discussion after the main Theorem 5.1, and 2) in the counterexample on $\mathbf{H}$ in the rebuttal ("Implications of the assumption"). Therefore, even perfect NTK block structure does not guarantee NC, and our work provides necessary conditions for the emergence of NC in DNNs with NTK alignment. These necessary conditions cannot be derived directly from the assumption and require analysis of the dynamics presented in the paper.
- The NTK block structure assumption is supported by a large body of empirical evidence (including our numerical experiments and previous works on NTK alignment). Hence, we believe that our assumption is a justified simplification of realistic DNNs' behaviour.
Therefore, while our assumption certainly makes the analysis of the DNNs' dynamics much simpler, our results still 1) are non-trivial, 2) approximate the behaviour of realistic DNNs, and 3) provide new insights into NC (see also the response to Reviewer LnGq regarding the broad implications of our results). Moreover, since our results are novel and connect two empirical phenomena widely studied by separate sections of the ML community, we believe that our work may be interesting and potentially insightful for a wide audience. | null | null | null | null | null | null |
Text-to-Image Diffusion Models are Zero Shot Classifiers | Accept (spotlight) | Summary: This paper uses the text-to-image diffusion models as zero-shot classifiers. It proposes to compute a subset of the full scores matrix to be more efficient. It proves Imagen and Stable Diffusion have good zero-shot performance and are robust to misleading textural cues.
Strengths: 1. It is novel to use pre-trained diffusion models as zero-shot classifiers.
2. Building a scores matrix with each label prompt is reasonable for me.
3. Efficiency is very important in this setting. The authors discuss this point well in Section 3.1.
4. It is interesting to see that the generalization ability of large pretrained models leads to robustness to shape-texture conflicting cues.
5. Empirical results are sufficient to support the conclusion.
Weaknesses: 1. It lacks some theoretical discussion about why diffusion models can perform well in classification.
2. Classification is a simple task. I am not sure whether it is too easy for a large model.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Is it possible to extend this work to various downstream tasks such as detection or segmentation?
2. In large datasets like ImageNet, there are many kinds of sub-classes, such as many types of foxes. In this case, is it still reasonable to use the diffusion model, given that such sub-classes look very similar in noisy images?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please refer to "Questions".
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the helpful comments and suggestions. We address your questions and concerns below:
- **Theoretical discussion**: Generally, the idea of using a generative model as a classifier is a fairly old and well-studied idea (e.g. “On Discriminative vs. Generative Classifiers: A comparison of logistic regression and naive Bayes” NeurIPS 2001). Our main contribution is in showing how to efficiently use a diffusion model this way and presenting empirical results comparing them with contrastive zero-shot models.
- **Classification is a simple task**: We think image classification tasks certainly can be very challenging; for example CLIP gets around 50% on cue-conflict imagenet and does no better than chance in our attribute binding experiments.
- **Q1 Extension to other downstream tasks**: There have been several recent works that have used pre-trained text-image diffusion models (either as zero-shot or through their representations) for segmentation and detection e.g. https://arxiv.org/abs/2303.04803, https://arxiv.org/abs/2211.13224, https://arxiv.org/abs/2303.02153 etc. However, the method proposed in the paper would not work directly for many downstream tasks because it scores images based on the text prompt rather than predicting some structured output.
- **Q2 Fine-grained image classification**: We agree with the intuition that distinguishing sub-classes would be hard when there is lots of noise. However, our method uses many noise levels and we expect the scores at low noise-levels will be more discriminative for distinguishing similar classes. Indeed, our heuristic weighting function puts more weight on lower noise levels. | Summary: The paper presents a new method that utilizes generative models as image classifiers and initiates explorations of notable, open-source models using this approach. Beginning with recognized image datasets, the researchers examine the models and evaluate the scores they have achieved. Several experiments are conducted to profile the behavior of these models, such as determining their optimal operating resolutions and dataset types. The paper assesses these models' capabilities by evaluating dataset competencies, such as pointing out the prominence of the Imagen model with the MNIST dataset due to its strong text generation skills.
Further characterizations are made on the models' ability to handle shape-texture conflicting cues using the Cue-Conflict dataset. The paper suggests that generative models, utilized through their proposed method for recognition tasks, exhibit more robustness compared to traditional ConvNets.
Lastly, the paper explores how these models perform in attribute binding tasks. The researchers report that while the CLIP model's performance is near random, both the Stable Diffusion (SD) and Imagen models show promising potential in these tasks.
Strengths: The strengths of this paper shine through in several ways. Firstly, the paper explains the research and results in a way that's easy to understand. This clear writing helps to make complex ideas accessible to a wider audience.
The paper's thorough exploration of generative models also sets this work apart. They dive deep into understanding how these models behave. This includes looking at how well the models can handle images where shapes and textures don't match, as well as how they can connect attributes together. These insights are not just interesting, but they also add to our understanding of how these models work.
Another key strength of this paper is the smart improvements the authors make. For instance, they use a technique called timestep weighting to improve how the models classify images. This technique is a smart way to reduce the impact of noise when larger timesteps are sampled, making the models more reliable. Such improvements amplify the significance of the paper.
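The scoring scheme discussed above can be sketched abstractly. In the sketch below, `fake_denoiser` is a stand-in for a real text-conditional diffusion model (the actual method queries Stable Diffusion or Imagen with a class-label prompt), and the noising schedule and weighting function are invented for illustration only; it is not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_denoiser(x_t, class_idx, t):
    # Stand-in for a conditional diffusion model's noise prediction eps_hat(x_t, c, t);
    # a real implementation would run the model's UNet with the class prompt.
    return 0.1 * x_t + 0.01 * class_idx * np.sin(10.0 * t)

def classify(x, n_classes, timesteps, weights, n_noise=4):
    """Pick the class whose prompt yields the lowest weighted denoising error.

    Shared noise: every class is scored on the *same* (t, eps) pairs, which
    lowers the variance of score differences between classes.
    """
    scores = np.zeros(n_classes)
    for t, w in zip(timesteps, weights):
        for _ in range(n_noise):
            eps = rng.standard_normal(x.shape)
            x_t = np.sqrt(1.0 - t) * x + np.sqrt(t) * eps   # toy noising schedule
            for c in range(n_classes):
                scores[c] += w * np.mean((fake_denoiser(x_t, c, t) - eps) ** 2)
    scores /= len(timesteps) * n_noise
    return int(np.argmin(scores)), scores

timesteps = np.linspace(0.1, 0.9, 5)
weights = 1.0 / (timesteps + 0.1)   # hypothetical weighting emphasizing low noise
pred, scores = classify(rng.standard_normal(16), n_classes=3,
                        timesteps=timesteps, weights=weights)
```

With a real denoiser, the per-class score is a Monte-Carlo estimate of the weighted diffusion loss under each class prompt, and the argmin acts as the classifier.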
Weaknesses: # Pre-rebuttal
While the paper continually emphasizes the intention to introduce a method for using generative models as classifiers, there's a significant overlap with the methodology presented in "Your Diffusion Model is Secretly a Zero-Shot Classifier", which also uses the Stable Diffusion generative model as a classifier. Despite the similar scoring mechanisms, they attribute their improved results to the application of timestep weighting and a more efficient class pruning method. This does not, however, fully offset the lack of originality due to the similarity of their method with the cited work. Furthermore, by choosing not to pursue enhancements such as prompt engineering, the authors have seemingly missed an opportunity to establish a stronger benchmark for future studies. This decision, coupled with the questions regarding the paper's originality, could limit the overall significance and potential impact of their work.
Certain parameters used in the paper, such as the cutoff_pval, are not adequately examined for their effects, making them appear as arbitrary choices or 'magic numbers'. Similarly, the weighting function used remains largely unexplored and unexplained, leaving a gap in understanding its impact on the results.
The paper provides comparisons for their efficiency improvements, including shared noise and class pruning methods. However, the paper falls short in determining the peak accuracy reached by the vanilla method, leaving an element of uncertainty and a lack of thorough comparison between the methods employed.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. L271: few-shot classification?
2. Why not use more commonly used zero-shot (or few-shot) learning datasets such as CUB, AwA or miniImageNet?
3. Why not compare against strong (even if not sota) ZSL baselines with various benchmarks?
4. How do you tune the hyper-parameters? Setting any HP wrt. test performance is a violation of ZSL protocol. Can you describe a reproducible HP tuning procedure?
5. Regarding the compositional generalization experiments: why not use existing compositional zsl benchmarks and compare against existing methods?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Most limitations are listed above. No additional (especially societal) limitation to be reported here.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the helpful comments and suggestions. We address your questions and concerns below:
- **Overlap with another paper**: As we state in the conclusion, “Your Diffusion Model is Secretly a Zero-Shot Classifier” is concurrent work. It was released on arxiv (but not in a peer-reviewed venue) around a month before NeurIPS submission deadline. In fact, our method and results were submitted and presented in workshops from the end of January 2023 and were thus conducted before the other work (to preserve the sanctity of double blind reviewing, we can not provide exact details on this). It is our understanding that the existence of a recent non-peer-reviewed pre-print that is similar to the work under submission should not affect novelty/originality judgements.
- **Not pursuing enhancements such as prompt engineering**: We actually extensively explored different prompts, but found none to significantly outperform simple prompts like “a photo of a __.” (e.g. see lines 207-209).
- **Weighting function unexplored**: We did explore the weighting function in detail, and included those results and discussion in the supplementary section due to space constraints (see Appendix B and its reference in the main text in Line 127).
- **Hyperparameter selection**: The choice of *min_scores, max_scores*, and *cutoff_pval* are simply reasonable choices that keep the method efficient to run, not crucial “magic numbers.” Different choices of these parameter values will make the method use a different amount of compute (with less compute leading to noisier estimates of the best-scoring classes). However, different choices of these values do not change the actual behavior of the model in expectation because the method is always estimating the same argmin (eq 2). Furthermore, we selected these values and weighting function using CIFAR-100 (see section 3, Lines 116-127) but not on any of the many other datasets we experimented on. We have been careful to preserve true ZS classification protocol in our experiments.
- **Peak accuracy for vanilla method**: The “efficient” and “vanilla” methods are different ways of estimating exactly the same argmin (eq 2 in the paper), so they have the same peak accuracy given enough compute. However, the methods do have different efficiencies, which is shown in Figure 2.
- **Q1 L271**: Good catch! That should be “zero-shot” classification.
- **Q2 Other datasets**: CUB, AwA, and miniImageNet are mostly used for few-shot classification, which we don’t explore in our paper. Instead, we use datasets from the CLIP paper, which have become pretty standard for evaluating zero-shot classification (e.g. used to evaluate ALIGN, BASIC, and LiT)
- **Q3 Strong baselines**: We think CLIP is a pretty strong baseline for zero-shot classification. While there exist stronger models, many are either not open source (e.g. CoCa) or are pre-trained on classification-like data (e.g. LiT). Additionally, most subsequent zero-shot methods are based on CLIP. We are not sure which other baseline the reviewer had in mind.
- **Q4 Hyperparameters**: See our “hyperparameter selection” bullet point above. Briefly, we selected the hyperparameter values and weighting function using CIFAR-100 (see section 3, Lines 116-127), but not on any of the many other datasets we experiment on.
- **Q5 Why not use an existing benchmark**: We are using an existing benchmark from Lewis et al. (https://arxiv.org/abs/2212.10537).
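For concreteness, the adaptive class pruning discussed above might look roughly like the following sketch. The knob names `min_scores` / `max_scores` echo the rebuttal's, but the statistics are a simplification of ours: we prune on a z-score of paired score differences instead of the paper's `cutoff_pval` test, and `true_err` is a synthetic stand-in for per-(t, ε) diffusion errors:

```python
import numpy as np

rng = np.random.default_rng(1)

def prune_classes(history, active, cutoff_z=3.0):
    """Drop classes whose mean score is significantly worse than the current best.

    history: (n_draws, n_classes) per-draw scores. Because noise is shared
    across classes, the per-draw differences between classes are paired."""
    means = history[:, active].mean(axis=0)
    best = active[int(np.argmin(means))]
    keep = []
    for c in active:
        d = history[:, c] - history[:, best]        # paired differences vs. best
        se = d.std(ddof=1) / np.sqrt(len(d)) + 1e-12
        if d.mean() / se < cutoff_z:                # not clearly worse: keep
            keep.append(c)
    return keep

# Synthetic demo: class 0 has the lowest true error, so it should survive.
true_err = np.array([1.0, 1.5, 2.0, 2.5])
active = list(range(4))
history = np.empty((0, 4))
min_scores, max_scores = 8, 256                     # names echo the paper; values are ours
while len(active) > 1 and len(history) < max_scores:
    batch = true_err + 0.2 * rng.standard_normal((8, 4))  # one batch of shared draws
    history = np.vstack([history, batch])
    if len(history) >= min_scores:
        active = prune_classes(history, active)
predicted = active[int(np.argmin(history[:, active].mean(axis=0)))]
```

This matches the rebuttal's point that the knobs only trade compute for estimator variance: the procedure always estimates the same argmin, and looser thresholds just let it stop earlier with noisier estimates.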
---
Rebuttal Comment 1.1:
Comment: I’d like to thank the authors for their detailed response.
* The authors are right that “Your Diffusion Model is Secretly a Zero-Shot Classifier” is indeed arxiv-only and apparently quite recent (1.5 months before NeurIPS deadline), so it can be ignored following the NeurIPS’23 policy. It is an unintentional mistake on my side. However, as a suggestion, I believe it will be beneficial for everyone if the paper acknowledges the arxiv paper as concurrent recent work and discuss the (dis)similarities briefly.
* Prompts: thanks, please add a brief summary on your explorations on prompt enhancements in the paper.
* HPs: thanks for the clarification. I find the explanation given in the paper a bit cryptic. The answer in the rebuttal is much clearer. I'd strongly suggest revising the explanation in the paper to improve clarity.
* Datasets: traditional ZSL papers (without web-scale training and more “clean” protocols) actually predominantly do use CUB & AwA (but not miniImageNet indeed), I disagree with the response, but existing evaluations are acceptably strong.
* Other points: thanks for the clarifications & pointers, they’re sufficient & clear.
While some (minor) revisions based on rebuttal will be pending for the camera-ready version, following the important clarifications I have increased my rating to ‘weak accept’.
---
Reply to Comment 1.1.1:
Comment: Thank you for the comments and feedback. We will make the requested clarifications in the revised paper. | Summary: This study explores the potential of text-to-image diffusion models as zero-shot classifiers. The models show competitive performance with CLIP on zero-shot image classification datasets and excel in shape/texture bias tests and attribute binding. The findings suggest that generative pre-training should be considered as a compelling alternative for vision-language tasks.
Strengths: 1. The authors thoroughly investigate the zero-shot classification capabilities of text-to-image diffusion models through extensive experiments, covering both standard and challenging benchmarks.
2. Additionally, the authors compare the classification performance of Stable Diffusion (SD) and Imagen, revealing intriguing results that suggest diffusion models trained on original images exhibit superior generalization compared to those trained on the latent space of VAE.
3. The authors offer a range of practical and effective techniques to accelerate the classification process, providing valuable insights for improved efficiency.
Weaknesses: 1. While the authors present various strategies to speed up the classification process, it remains significantly slower compared to traditional classification models.
2. It would be beneficial to include a comparison with other multimodal models like BLIP.
3. There appears to be a disconnect between lines 192 and 193, as the introduction of Imagen is only mentioned in that paragraph.
4. Considering that CLIP is trained with 400M text-image pairs and SD is trained on LAION-5B, a direct comparison may not be entirely fair. It would be interesting to see the performance gap between SD and CLIP when both models are trained on the same amount of data.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the helpful comments and suggestions. We address your questions and concerns below:
- **Runtime**: As we say in the paper, the method does not produce a very practical classifier. However, we believe it still has a lot of value for illuminating what kinds of visual knowledge diffusion models learn, comparing diffusion model abilities on fine-grained tasks (such as attribute binding), and more generally comparing generative vs. contrastive pre-training.
- **Comparison with other multimodal models**: We focused on CLIP as it is the most widely used zero-shot classifier and most subsequent methods (including BLIP) use a similar contrastive training method. As far as we know, BLIP is not often used as a zero-shot image classifier (although of course it is used for other tasks like captioning).
- **Lines 192,193**: Sorry for the confusion: that sentence was meant to indicate which version of Stable Diffusion we are using, not suggest we are only using Stable Diffusion. We will rephrase it to avoid confusion.
- **Different training datasets**: We completely agree, and hope in the future the community will release more comparable models. Please see the general response above for more discussion and our explanation on model comparisons. We also note that the dataset comparison between Stable Diffusion and CLIP isn’t entirely clear because (1) CLIP was trained for 32 epochs, while SD was trained for less than 1, and (2) SD was largely trained on an aesthetic subset of LAION-2B, which is substantially smaller than the full LAION-5B dataset. | Summary: The paper inverts pre-trained text-to-image diffusion models by using bayes rule, and evaluates them over a variety of benchmarks. For the evaluation they use two diffusion models: Stable Diffusion and Imagen. They compare these models against CLIP-L/14. They show a variety of benchmarks where the diffusion model does better at classification than a standard SOTA discriminative model such as CLIP-L/14.
Strengths: i) The paper does a very dense evaluation of their proposed method
ii) They give good analysis on why/where generative classifiers would be useful over their discriminative counterpart.
iii) The paper proposes weighted timesteps and sampling methods that improve the accuracy and speed of the classifier.
Weaknesses: i) the analysis becomes a bit weak, as none of the models considered are trained on the same dataset.
ii) Ablations are in the supplementary and results are ablated only on one dataset, would be good to have atleast 2-3 diverse datasets (small/high resolution)
I have a few questions, listed below.
minor -
Line 271: shouldn't it be zero-shot instead of few-shot?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: i) In Table 4, weighting seems to significantly help Imagen, but not so much SD. Why is that? What if you learned the weighting for SD separately and tried to generalize it to different datasets?
ii) Are any of the models trained on the same/similarish datasets (ViT22B vs Imagen)?
It would be good to clarify this in the paper.
iii) The paper raises the resolution mismatch of the generative model (specifically Imagen) as a big concern. Do the authors expect future generative models that are explicitly trained for classification to resolve this issue? If so, how? SD does train at a higher resolution but does not perform better than CLIP.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the helpful comments and suggestions. We address your questions and concerns below:
- **Different training datasets**: As we mentioned in the general response above, in this work, we focused on studying the capabilities of existing powerful models because training such models from scratch would require huge compute resources. However, many of the differences between models are striking enough that we are confident they point to fundamental differences in what is learned from their training objectives rather than differences in their training data. For example, diffusion models greatly outperform CLIP on robustness to shape-texture conflicts and attribute binding, and pixel-based diffusion substantially outperforms latent-space based diffusion models on OCR data like MNIST and SVHN (see Lines 231-236).
- **Ablations**: As we state in line 536 of the supplementary materials, we did find similar results to hold across many datasets. We used our results on Caltech 101 as a representative example to support our findings, but we are happy to include numbers on other datasets in the revised version of the paper.
- **Line 271**: Good catch – yes, that should say zero-shot.
- **Q1 Learned weighting**: We did try learning the weighting for SD separately on Caltech101 (last row of Table 4 in Appendix B). We did not try applying the learned weights on other datasets because it barely improved over the heuristic weighting while being more complicated. We also found it interesting that SD is more robust to the choice of weighting scheme. Mechanistically, the reason is essentially that (1) “Simple” and “VDM” weighting put more weight on earlier timesteps than “Heuristic” and (2) Imagen tends to be an inaccurate classifier at very small noise levels. We intuitively believe this is a consequence of pixel vs latent diffusion.
- **Q2 Models trained on similar datasets**: Imagen and Stable Diffusion are perhaps most similar in that both are trained on LAION (although Imagen uses some additional data). ViT22B is trained on JFT, which is less similar because it is a (semi-automatically labeled) classification dataset. We discuss training data for ViT on line 259 and for Imagen, SD, and CLIP in section A of the supplementary materials.
- **Q3 Resolution mismatch**: We expect non-cascaded diffusion models such as latent diffusion models or simple diffusion (https://arxiv.org/abs/2301.11093) to avoid the resolution mismatches. We also think explicitly fine-tuning generative models for classification would improve results and would be an interesting future direction of research. While CLIP outperforms SD on most classification tasks, SD is better at attribute binding and the cue-conflict dataset, suggesting that the different pre-training methods may have different areas of strength rather than one being strictly better than the other. Intuitively, it makes sense that CLIP is better at standard image classification, as it was designed with transfer to classification tasks in mind while SD and Imagen were designed as image generators.
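To make the weighting discussion above concrete, the sketch below shows how a timestep weighting redistributes which noise levels dominate the aggregate score. The three functional forms are invented for illustration only; the paper's actual "Simple"/"VDM"/"Heuristic" weightings are defined in its Appendix B and are not reproduced here:

```python
import numpy as np

t = np.linspace(0.01, 0.99, 99)   # toy noise levels in (0, 1); t = 0 is clean

def normalize(w):
    return w / w.sum()

w_uniform = normalize(np.ones_like(t))        # every noise level counts equally
snr = (1.0 - t) / t                           # toy signal-to-noise ratio
w_snr = normalize(snr / (1.0 + snr))          # SNR-shaped weighting
w_lownoise = normalize(np.exp(-5.0 * t))      # strongly favors low noise levels

# Fraction of total weight each scheme places on the low-noise half of the schedule:
low = t < 0.5
mass = {"uniform": w_uniform[low].sum(),
        "snr": w_snr[low].sum(),
        "lownoise": w_lownoise[low].sum()}
```

Under these toy forms, the low-noise-favoring scheme concentrates most of its mass where per-timestep errors are most discriminative between similar classes, which is the intuition behind putting more weight on small noise levels.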
---
Rebuttal Comment 1.1:
Comment: Thanks for the response.
I'm wondering if the authors could comment or compare against LiT (https://openaccess.thecvf.com/content/CVPR2022/html/Zhai_LiT_Zero-Shot_Transfer_With_Locked-Image_Text_Tuning_CVPR_2022_paper.html)
LiT uses a pre-trained language model to fine-tune discriminative image models. One could potentially argue that the improvement in performance is primarily due to Imagen's use of a language model, which other discriminative models don't use.
---
Reply to Comment 1.1.1:
Title: Use of pre-trained LM and comparison with LiT
Comment: Stable Diffusion uses a frozen CLIP text encoder rather than a language model and exhibits similar behavior to Imagen (decent at zero-shot classification, excellent on cue-conflicted imagenet, better than chance on attribute binding), even though results are generally a bit worse. We therefore think CLIP vs. Stable Diffusion already is a good comparison for evaluating differences in image pre-training methods while keeping the text encoder identical. Along similar lines, [this paper](https://arxiv.org/pdf/2303.09769.pdf) shows that unconditional diffusion pre-training (without any text component) performs well when transferred to downstream tasks, offering additional evidence that the diffusion training is useful for representation learning aside from the choice of text model.
We do agree that the improvement of Imagen over Stable Diffusion could be due to the more powerful text encoder, but we think LiT may not be a good comparison for looking into this more. The issue is that the LiT image encoder is pre-trained on JFT-3B, a semi-automatically labeled fine-grained image classification dataset, and then frozen. Since LiT's image encoder essentially sees classification data during pre-training, we don’t think using it is truly zero-shot, so the comparison would not be direct. | Rebuttal 1:
Rebuttal: ### **General response to all reviewers**
We thank all the reviewers for their helpful comments and suggestions.
Generally, reviewers (*LkNq* and *yeML*) had questions around the fairness of comparison between models trained on different datasets (such as CLIP and Stable Diffusion). In this work, we focused on studying the capabilities of existing powerful models because training such models from scratch would require huge compute resources. However, many of the differences between models are striking enough that we are confident they point to fundamental differences in what is learned from their training objectives rather than differences in their training data. For example, diffusion models greatly outperform CLIP on robustness to shape-texture conflicts and attribute binding, and pixel-based diffusion substantially outperforms latent-space based diffusion models on OCR data like MNIST and SVHN (see Lines 231-236).
Secondly, reviewers (*LkNq* and *13fF*) had questions regarding ablation studies for the weighting function. We studied the effects of different weighting functions in detail and have included this discussion in Appendix B.
Reviewers (*yeML* and *13fF*) also had questions regarding comparisons to other baselines. In our work, we focused on CLIP as it is the most widely used zero-shot classifier and most subsequent methods incorporate similar contrastive training methods. While stronger models than CLIP do exist, many are either not open source (e.g. CoCa) or are pre-trained on classification-like data (e.g. LiT). For all our experiments, we have compared against existing benchmarks: the datasets used in the CLIP paper for image classification, the benchmark proposed by Geirhos et al. (2021) for robustness to shape-texture bias, and the benchmark proposed by Lewis et al. (2023) for attribute binding.
We address the further concerns and questions of each reviewer in detail below. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Learning Score-based Grasping Primitive for Human-assisting Dexterous Grasping | Accept (poster) | Summary: This paper introduced a novel and challenging task that performs dexterous grasping according to human wrist movements. This task is potentially useful for applications with prosthetic hands.
The paper further proposed a novel two-stage framework that solves the two challenging aspects of the proposed task and demonstrated strong performance in both simulated and real-world experiments.
Strengths: The proposed task is novel and potentially helpful to social welfare. The proposed framework is intuitive and is properly designed for the challenges of its task. The authors have also conducted extensive experiments to show the capacities of the proposed method.
The paper is well-structured and written.
Weaknesses: From the qualitative results in the supplementary video, I noticed that for most objects, the graspings are from the same angle relative to the object. For example, with the chips can, all demonstrated graspings are from the side of the cylinder regardless of how the can is placed. This makes me wonder if the proposed method can truly adapt to different approach angles and different *user intentions*. Please correct me if I missed anything from the videos.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. For the baseline methods, are all baselines re-trained to take the wrist pose as a condition? I didn’t find this piece of information in the paper.
2. In figure 5, why do w/o $a^r$ and w/o $a^s$ success rates drop as more samples are seen after 5e6 samples?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: 1. The method assumes full point cloud observation which may limit its application in the real world.
2. The qualitative results did not show how the proposed method adapts to different wrist poses relative to the object.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Q1: From the qualitative results in the supplementary video, I noticed that for most objects, the graspings are from the same angle relative to the object. For example, with the chips can, all demonstrated graspings are from the side of the cylinder regardless of how the can is placed. This makes me wonder if the proposed method can truly adapt to different approach angles and different user intentions. Please correct me if I missed anything from the videos.**:
Thank you for bringing up this point. While the approach angles appear similar in the videos, the object's pose relative to the wrist actually varies because the object poses change. In principle, there are two frames: a world frame and a wrist frame. The grasps appear to come from the same angle relative to the object because the hand's movement is similar in the world frame. However, since the object's pose in the world frame also changes (for instance, the Bleach Cleanser is positioned both vertically and horizontally), the object's pose in the wrist frame changes accordingly, resulting in different relative poses. Regarding the Chips Can, although all demonstrations involve side grasps, it is essential to recognize that the object's pose relative to the wrist is in fact distinct across them.
To further illustrate the adaptability of our method to various approach angles and user intentions, we have conducted two additional experiments. As detailed in **Section 1 of the anonymous project page**, participants first move the hand to either the middle or the right side of the object, then start to approach. When dealing with the chips can, participants attempt to grasp its top, while for the mug, participants attempt to grasp its edge. Alongside these real-world experiments, we have also conducted more comprehensive simulation experiments, as detailed in the [Q2 of Common Response](https://openreview.net/forum?id=fwvfxDbUFw&noteId=DtPbwqFxR8), to effectively demonstrate the adaptability of our method.
> **Q2: For the baseline methods, are all baselines re-trained to take the wrist pose as a condition? I didn’t find this piece of information in the paper.**:
Thank you for pointing this out. To clarify, all baselines are re-trained to take the latest 5 frames of wrist poses as input under the human-assisting setting (i.e., the agent cannot move the wrist), which aligns with the "Ours" experimental setup. We will add this detail in a future revision.
> **Q3: In figure 5, why do the w/o $a^r$ and w/o $a^s$ success rates drop as more samples are seen after 5e6 samples?**
Thank you for raising this concern. We hypothesize that this issue is tied to the fundamental exploration-exploitation challenge in classical reinforcement learning. Since the primitive policy provides strong guidance for the reinforcement learning algorithm, the agent may tend to focus solely on exploitation of the primitive policy. This behavior can lead to getting stuck in local minima and ultimately result in a decrease in the success rate. Nevertheless, with the incorporation of both $a^r$ and $a^s$, our policy is better equipped to strike a balance between exploration and exploitation. This balanced approach allows the agent to navigate both the exploration of novel strategies and the exploitation of existing knowledge more effectively, leading to improved overall performance.
> **Q4: The method assumes full point cloud observation which may limit its application in the real world.**:
Thank you for bringing this up. Due to the page limit, please refer to [Q4 of Common Response](https://openreview.net/forum?id=fwvfxDbUFw&noteId=DtPbwqFxR8).
> **Q5: The qualitative results did not show how the proposed method adapts to different wrist poses relative to the object.**:
Thank you for bringing this up. We have conducted additional experiments to demonstrate the adaptability of our method. Due to the page limit, please refer to [Q2 of Common Response](https://openreview.net/forum?id=fwvfxDbUFw&noteId=DtPbwqFxR8). | Summary: This paper introduces a novel task called human-assisting dexterous grasping, which aims to train a policy for controlling a robotic hand's fingers to assist users in grasping objects. Unlike conventional dexterous grasping, this task is more complex as the policy must adapt to diverse user intentions and the object's geometry. The proposed approach consists of two sub-modules: Grasping Gradient Field (GraspGF) and a history-conditional residual policy. GraspGF learns 'how' to grasp by estimating the gradient of a synthesized success grasping example set, while the residual policy determines 'when' and at what speed the grasping action should be executed based on the trajectory history. Experimental results show that the proposed method outperforms baselines in terms of user-awareness and practicality in real-world applications.
Strengths: This paper's strengths can be outlined as follows:
1. Introduction of a unique dexterous grasp task involving shared autonomy between humans and robots, a topic not extensively explored in prior research.
2. Application of the Denoising Score Matching method to the grasping task.
3. Explicit representation of robot finger velocity.
4. A thorough acknowledgment of the system's limitations, including the requirement for a complete point cloud.
Weaknesses: However, the paper also has some drawbacks:
1. The proposed method is better suited for teleoperation settings compared to the reinforcement learning (RL) baselines used in the experiments. It is essential to include comparisons to teleoperation methods without assisted grasping, both qualitatively and quantitatively.
2. The paper's presentation could be enhanced. For instance, the individual images in Figure 2 could be better explained, as it is currently difficult to comprehend and not highly informative.
3. The residual policy, which corrects the primitive policy's action, does not consider the primitive policy action as input. This seems illogical for predicting velocity and bias terms without knowing the direction.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Regarding $r_{sim}$ in Equation 7, more clarification on its functionality would be helpful. Additionally, it would be beneficial to know the inference speed for each module. For human assistance, quick response times are crucial for seamless human interaction. Since visual modules are utilized, a profiling analysis may be necessary. I am happy to raise the score if the concerns are addressed adequately.
## After Rebuttal
------------------------
The author response looks great to me. Some of the presentation issues are also addressed during the rebuttal phase. I agree with the author that this is more focus on assisting upper limb amputees with prosthetic hands instead of assisting normal persons. I would like to raise the score and glad to see it is accepted.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: To address the weaknesses, the paper's authors could improve the presentation by creating more self-contained figures. Furthermore, the hand/object in several images, such as Figure 3, is too small to clearly discern the interaction patterns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Q1: The proposed method is better suited for teleoperation settings compared to the reinforcement learning (RL) baselines used in the experiments. It is essential to include comparisons to teleoperation methods without assisted grasping, both qualitatively and quantitatively.**:
Apologies for the confusion. To clarify, the main motivation and practical application of assisting grasping is helping upper limb amputees with prosthetic hands rather than assisting able-bodied users, as shown in **Figure 1 of the main pdf**. Traditional teleoperation methods [1] are unsuitable for assisting upper limb amputees to grasp, because we cannot obtain information about the human's fingers. In our real-world experiments, we use teleoperation solely to obtain human wrist poses, mimicking grasping by upper limb amputees. We will elaborate on the difference between teleoperation and human-assisting grasping in the introduction section of the main pdf in a future revision.
> **Q2: The paper's presentation could be enhanced. For instance, the individual images in Figure 2 could be better explained, as it is currently difficult to comprehend and not highly informative.**:
Apologies for the confusion. We will add additional explanations and details to the pipeline shown in Figure 2 of the main PDF. This will include more detailed annotations that explicitly indicate how the primitive policy's actions are utilized as input for the residual policy. We will also further illustrate the functionality of the two components and make the elements in each image more compact and clear. Thanks again for your valuable suggestion!
> **Q3: The residual policy, which corrects the primitive policy's action, does not consider the primitive policy action as input. This seems illogical for predicting velocity and bias terms without knowing the direction.**:
Thanks for pointing this out. We would like to clarify that the residual policy does take the primitive policy's action as an input. The primitive policy first takes the joint state $J_t$, object point cloud $o_t$, and wrist pose $b_t$ as inputs to generate the primitive action $a^p$. The residual policy then takes the primitive action $a^p$, joint state $J_t$, object point cloud $o_t$, and wrist trajectory $H_t$ as inputs to produce the residual actions $a^s$ and $a^r$.
We are sorry for the confusion by simplification of notation in the initial presentation. In the revised version of our paper, we will provide a more detailed and accurate description of the relationship between the primitive policy's action and the residual policy. Additionally, in the revised version of Figure 2, we will include detailed annotations that explicitly indicate how the primitive policy's action is utilized as input for the residual policy.
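The composition of primitive and residual actions described above can be sketched as follows. This is a minimal illustration with names of our own choosing; the actual policies are neural networks conditioned on point clouds and wrist trajectories, not simple arrays:

```python
import numpy as np

def compose_action(a_p: np.ndarray, a_s: np.ndarray, a_r: np.ndarray) -> np.ndarray:
    """Final joint action: a^p * a^s + a^r (elementwise product plus bias),
    i.e. the residual policy rescales the primitive action per joint and
    adds a correction term on top of it."""
    return a_p * a_s + a_r

# Toy example with 2 finger joints (hypothetical values):
a_p = np.array([1.0, 2.0])    # primitive action from GraspGF
a_s = np.array([0.5, 0.5])    # residual scale
a_r = np.array([0.1, -0.1])   # residual bias
final_action = compose_action(a_p, a_s, a_r)
```

Here `final_action` would be `[0.6, 0.9]`: the residual policy both modulates the speed of the primitive action and nudges its direction.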
> **Q4: Regarding $r_{sim}$ in Equation 7, more clarification on its functionality would be helpful**:
Thank you for pointing this out. As the primitive policy provides good guidance on "how to grasp," we introduce the $r_{sim}$ reward to encourage the final output action ${a^p} \odot {a^s} + {a^r}$ to explore in the direction of the primitive policy's action $a^p$. This reward is the inner product of $\frac{a^p}{\parallel a^p \parallel_2}$ and $J_t - J_{t-1}$, where $J$ denotes the state of the finger joints. A higher value of $r_{sim}$ indicates a smaller angle between the gradient and $J_t - J_{t-1}$, implying that the finger-joint movement is more similar to the gradient. We have already conducted an ablation study of $r_{sim}$ in **Section 4.1 of the Supplementary** demonstrating how $r_{sim}$ accelerates the residual policy's learning process and leads to more effective utilization of the generalization capabilities of the primitive policy.
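The $r_{sim}$ computation described above can be sketched in NumPy as follows. The function and variable names are ours (not from the released code), and the small epsilon guarding against a zero-norm action is our addition:

```python
import numpy as np

def r_sim(a_p: np.ndarray, j_t: np.ndarray, j_prev: np.ndarray) -> float:
    """Similarity reward: inner product of the normalized primitive action
    (the gradient direction over finger joints) with the actual finger-joint
    displacement J_t - J_{t-1}.

    a_p    : primitive policy action over the finger joints
    j_t    : finger joint state at step t
    j_prev : finger joint state at step t-1
    """
    direction = a_p / (np.linalg.norm(a_p) + 1e-8)  # a^p / ||a^p||_2
    return float(direction @ (j_t - j_prev))
```

Joint movement aligned with the primitive direction yields a positive reward, while movement opposing it yields a negative one, matching the interpretation of the negative $r_{sim}$ values in the final refinement steps.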
> **Q5: Additionally, it would be beneficial to know the inference speed for each module. For human-assistance, quick response times are crucial for seamless human interaction. Since visual modules are utilized, a profiling analysis may be necessary.**:
Thanks for bringing up this concern. We evaluated the inference speed on a GTX 1650, which is also the GPU used in our real-world experiments.
We set the batch size to 1 and ran each policy 50 times to obtain a reliable average inference time for each module.
As shown in **Table 2 of the rebuttal pdf**, both modules take less than 0.004 seconds for each inference.
This indicates that our integrated system is capable of seamless human interaction.
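The timing protocol can be sketched roughly as follows. This is a generic wall-clock profiler under our own assumptions: the warmup runs are our addition, and the policy call is a placeholder rather than the authors' actual model:

```python
import time

def mean_latency(fn, n_runs: int = 50, warmup: int = 5) -> float:
    """Average wall-clock seconds per call of `fn`, mirroring the
    batch_size=1, 50-run protocol described above. A few warmup calls
    absorb one-time costs (e.g., CUDA kernel compilation) before timing."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(n_runs):
        fn()
    return (time.perf_counter() - start) / n_runs

# Usage with a stand-in for a policy forward pass:
latency = mean_latency(lambda: sum(range(1000)))
```

For GPU inference one would additionally need to synchronize the device (e.g., `torch.cuda.synchronize()`) before reading the clock, otherwise the measured time only covers kernel launches.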
**Reference**:
[1] Handa, Ankur, et al. "Dexpilot: Vision-based teleoperation of a dexterous robotic hand-arm system."
---
Rebuttal Comment 1.1:
Title: After Rebuttal Reviewer Response
Comment: The author response looks great to me. Some of the presentation issues are also addressed during the rebuttal phase. I agree with the author that this is more focus on assisting upper limb amputees with prosthetic hands instead of assisting normal persons. I would like to raise the score and glad to see it is accepted.
---
Reply to Comment 1.1.1:
Title: Thank You!
Comment: Thanks for raising your rating to 6. We are so glad that our responses help address your concerns. Thanks again for all your valuable feedback! | Summary: This paper focuses on addressing a task called human-assisting dexterous grasping. The aim is to create a finger controller to grasp objects with the robot's wrist conditioned on a human user's wrist. The authors propose 1) a Grasping Gradient Field (GraspGF) which estimates the gradient of a synthetic grasping example, and 2) a residual policy achieved through reinforcement learning. Experimental results demonstrate the superiority of the proposed method over previous ones.
Strengths: The authors are tackling an interesting problem - guiding a robot hand to follow human wrist trajectories and utilizing a learnt finger controller to manipulate objects. This bears resemblance to teleoperation but only provides wrist information. I would appreciate further discussion on this aspect.
The authors introduce a score-matching-based method for learning a primitive policy and a residual policy to aid the primitive policy. This combines synthetic data with reinforcement learning to accelerate training and achieve better performance.
Authors have conducted a large number of real-world robotic experiments, showcasing the practical applicability of the proposed method.
Weaknesses: While I agree that human-assisting dexterous manipulation holds potential, it is concerning if this work only addresses the grasping task without considering other dexterous manipulation problems. What are the specific differences and motivations between an automatic dexterous grasping method and user-provided wrist? What is the practical application? If, as the authors suggest, grasping different parts meets varying needs, could the authors conduct experiments to demonstrate this? Or, stepping back, could the proposed method grasp the part that the user intends to grasp? Would it be possible to conduct experiments on it?
In Table 2, 'ap w/o coll' seems to achieve similar performance, and considering the increment from 55.6% to 56.5%, the residual policy seems not necessary.
The authors should continue to polish the paper. For instance, the subscript 't' in 'a' on lines 161, 163, and 165 lacks consistency. The formatting of Table 2 could also be improved.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: What are the outputs of GraspGF when the hand is at different stages, such as when the hand is far from the object at t_0 and close to the object at t_n? What actions are produced in these instances?
Given the same initial state and wrist trajectory, can we achieve diversified results?
Observing Figure 4 and Table 1, there is not much difference between the seen and unseen conditions. Could the authors attempt to analyze this?
Other datasets, such as DexYCB, could provide human wrist trajectories and thus increase the current 200 trajectories.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: What is the tolerance for errors in wrist estimation? Often, people cannot carefully move their wrists or do not have a precise estimation tool like Leap Motion.
Further, Leap Motion requires a complete hand without occlusion, which makes me doubt the algorithm's ability to help people with hand disabilities. It might be beneficial to consider alternative wearable sensors for wrist SE(3) estimation or an additional vision algorithm for wrist pose estimation based on RGB input.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Q1: This bears resemblance to teleoperation ...? ..differences and motivations between an automatic dexterous grasping method...? What is the practical application?**:
Thank you for bringing this up. Due to the page limit, please refer to [Q1 of Common Response](https://openreview.net/forum?id=fwvfxDbUFw&noteId=DtPbwqFxR8).
> **Q2: ... it is concerning if this work only addresses the grasping task without considering other dexterous manipulation problems**:
Thank you for raising this concern. Grasping serves as the foundational skill in manipulation tasks [1]; for example, picking up a hammer is the first step in nailing. Developing a generalized grasping algorithm forms the basis for more intricate manipulation tasks.
In fact, our current framework can also be adapted for manipulation. For example, we can collect expert demonstrations for manipulation tasks and employ a diffuser [2] to learn primitive policies, then combine them with the residual policy for further refinement during the manipulation process.
> **Q3: If, as the authors suggest, grasping different parts meets varying needs...could the proposed method grasp the part that the user intends to grasp? ...**:
Thank you for bringing this up. We have conducted additional experiments to demonstrate the adaptability of our method. Due to the page limit, please refer to [Q2 of Common Response](https://openreview.net/forum?id=fwvfxDbUFw&noteId=DtPbwqFxR8).
> **Q4: In Table 2, 'ap w/o coll' seems to achieve similar performance...**:
Thank you for bringing this up. We would like to clarify that the residual policy is necessary. Due to the page limit, please refer to [Q3 of Common Response](https://openreview.net/forum?id=fwvfxDbUFw&noteId=DtPbwqFxR8).
> **Q5: The authors should continue to polish the paper...**:
Thank you for your suggestion. We will review the formatting of the main pdf and ensure the consistency of the notation in the future revision.
> **Q6: What are the outputs of GraspGF when the hand is at different stages...**:
The primitive policy's actions depend solely on the object-wrist relative orientation. Thus, even when the hand is far away, the primitive policy will still close the fingers, as shown in **Figure 3 (a) Stage 1 of the rebuttal pdf**.
As the hand's posture progressively approaches the target grasp pose, the mean value of the primitive policy's actions decreases, as shown in **Figure 3 (a) Stage 2 of the rebuttal pdf**.
To further understand the residual policy's action, we use the measure $r_{sim}$: a higher value suggests the final action more closely follows the primitive policy. As shown in **Figure 3 (b) Stage 1 of the rebuttal pdf**, when the hand is far from the object, the residual policy restricts early finger closure to prevent collision; as the hand approaches the object, the residual policy starts to follow the primitive policy, as shown in **Figure 3 (b) Stage 2 of the rebuttal pdf**. However, as shown in **Figure 3 (b) Stage 3 of the rebuttal pdf**, once the hand is about to grasp the object, $r_{sim}$ starts to decrease: in the last few steps, the residual policy further refines the pose to hold the object firmly, leading to negative $r_{sim}$ values.
> **Q7: Given the same initial state and wrist trajectory, can we achieve diversified results?**:
We would like to clarify that our module is deterministic at inference, adhering to the standard implementation of PPO; thus the same initial state and wrist trajectory produce the same result.
> **Q8: Observing Figure 4 and Table 1, there is not much difference between the seen and unseen conditions...**:
Filtered from UniDexGrasp [3], the current grasp dataset contains 3000+ training objects, totaling 0.36 million grasp poses. We hypothesize that this comprehensive dataset sufficiently captures diverse data distributions. Consequently, the seen and unseen datasets may be i.i.d. with the training dataset, resulting in little difference. This observation aligns with the findings of UniDexGrasp.
> **Q9: Other datasets, such as DexYCB, could provide human wrist trajectory...**:
Thanks for your valuable suggestion. We actually use DexYCB as the human trajectory dataset and extract trajectories from HandoverSim with its default configurations. To increase diversity, we have augmented the dataset by fusing pairs of trajectories (see **Section 1.1 of the Supplementary**). We could also extract more human wrist trajectories by changing the configurations of HandoverSim, as suggested.
> **Q10: What is the tolerance for errors in wrist estimation? ...**:
Thank you for bringing up this concern. To demonstrate the robustness of our method, we inject two levels of noise into the wrist pose observation, following [4].
As indicated in **Table 3 of the rebuttal pdf**, our approach yields comparable outcomes under a 2-degree/2-cm estimation error, while exhibiting an approximately 10% reduction in performance under a 5-degree/5-cm error, indicating that our method can handle estimation error to some degree.
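A rough sketch of such noise injection, assuming simple uniform perturbations of the wrist pose; the actual noise model follows [4] and may differ, and all names here are hypothetical:

```python
import numpy as np

def perturb_wrist_pose(pos_m, euler_deg, trans_cm: float, rot_deg: float, rng=None):
    """Add uniform noise to a wrist pose to test robustness to estimation
    error. `pos_m` is the position in meters, `euler_deg` the orientation as
    Euler angles in degrees. Magnitudes mirror the 2-deg/2-cm and
    5-deg/5-cm settings mentioned above."""
    rng = np.random.default_rng() if rng is None else rng
    noisy_pos = pos_m + rng.uniform(-trans_cm, trans_cm, size=3) / 100.0  # cm -> m
    noisy_euler = euler_deg + rng.uniform(-rot_deg, rot_deg, size=3)
    return noisy_pos, noisy_euler
```

At evaluation time, the policy would simply receive the perturbed pose in place of the ground-truth one.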
> **Q11: Further, Leap Motion has a requirement for a complete hand without occlusion...**:
Thanks for your valuable suggestion! We agree that occlusion is a major issue when deploying our system in the real world, and addressing it would be a great future direction. In fact, we are developing a system that mounts a camera on the user's head and estimates the wrist pose using an instance-level pose estimation method.
**Reference**:
[1] Newbury, Rhys, et al. "Deep learning approaches to grasp synthesis: A review."
[2] Janner, Michael, et al. "Planning with Diffusion for Flexible Behavior Synthesis."
[3] Xu, Yinzhen, et al. "Unidexgrasp: Universal robotic dexterous grasping via learning diverse proposal generation and goal-conditioned policy."
[4] Chen, Hansheng, et al. "Epro-pnp: Generalized end-to-end probabilistic perspective-n-points for monocular object pose estimation."
---
Rebuttal Comment 1.1:
Comment: Thank you for the author's rebuttal. After reading the reviews from other reviewers and the author's responses, I maintain my borderline accept rating.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. If you have any additional concerns, we are glad to address them. | Summary: This paper proposes a new task called assisting grasping. The main difference between this task and classical dexterous grasping is the wrist movement is controlled by a human instead of by the grasping algorithm. The authors propose a two stage method to solve this problem. First, they learn the grasping skill using a successful grasping dataset via score-matching loss. Then, they fine-tune this policy using RL in simulation. They show the proposed method is better than pure RL and score-matching is useful compared to imitation learning algorithms.
Strengths: (+) The authors formulate the learning from a set of successful grasps as a denoising problem, which is quite interesting and novel. I think this is an effective design choice.
(+) This paper proposes to separate finger target position and finger movement velocities as two stage problem. This design makes learning more efficient.
(+) The experiments are comprehensive. The authors shows how each of the component affects the final performance of the policy.
(+) It also demonstrates the method in the real-world.
Weaknesses: (-) My major concern of this paper is whether the proposed task is more challenging than classical grasping, as claimed by the authors. From my perspective, the grasping process can be roughly divided into 1) hand approaches the object and 2) fingers close. The proposed task uses human teleoperation / a predefined wrist trajectory for the approaching phase and only learns how/when the fingers should grasp the object. In this sense, in terms of task difficulty, what is the difference from first moving the wrist to a close-enough position, and then grasping the object with a stationary wrist? Intuitively, I think this is an easier task.
(-) Following my previous argument, there should be a more rigorous argument on task difficulty if the authors want to emphasize that this task "presents a more complex challenge".
(-) There are formatting issues in particular Table 2.
(-) It relies on a perfect point-cloud model.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: As shown in Table 2, stage 1 results are already good if there is no collision. I’m curious if it’s possible to add a collision penalty loss in stage 1 training (similar to the \delta h in RL training) to improve the stage 1 policy?
Are there more elaborate arguments or evidence for why the proposed task is harder than grasping? On the website, the simulation results look like a classical grasping algorithm, while the hand poses are almost the same for all real-world results.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This paper is unlikely to have potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Q1: My major concern of this paper is whether the proposed task is more challenging than classical grasping...; As motivated by my previous argument, there should be more logical arguments on the task difficulty...; Is there a more elaborated arguments or evidences that why the proposed task is harder than grasping?**:
Thank you for bringing this up. Due to the page limit, please refer to [Q1 of Common Response](https://openreview.net/forum?id=fwvfxDbUFw&noteId=DtPbwqFxR8).
> **Q2: There are formatting issues in particular Table 2**:
Thank you for your suggestion. We will reexamine the formatting of our paper in the future revision.
> **Q3: It relies on a perfect point-cloud model**:
Thank you for bringing this up. Due to the page limit, please refer to [Q4 of Common Response](https://openreview.net/forum?id=fwvfxDbUFw&noteId=DtPbwqFxR8).
> **Q4: As shown in Table 2, stage 1 results are already good if there is no collision. I’m curious if it’s possible to add a collision penalty loss in stage 1 training (similar to the \delta h in RL training) to improve the stage 1 policy?**:
Thank you for bringing this up. We would like to clarify that incorporating a collision penalty loss in stage 1 does have the potential to enhance collision avoidance in grasp pose generation, but it does not specifically address collision avoidance during the grasping procedure. Due to the page limit, please refer to [Q3 of Common Response](https://openreview.net/forum?id=fwvfxDbUFw&noteId=DtPbwqFxR8).
> **Q5: Website simulation results look like a classical grasping algorithm?**:
Thank you for pointing this out. The wrist movement in the website simulation demonstration actually follows a pre-generated human-like trajectory, while the algorithm only controls the finger movement. However, in the classical grasping setting, the algorithm also governs the wrist movement. In order to more effectively illustrate the differentiation between classical grasping and human-assisting grasping, we will render wrist movement trajectories in the revised version of the videos.
> **Q6: While the hand poses are almost the same for all real-world results**:
Thank you for raising this point. We have also observed similar resemblances in grasp poses during our real-world experiments. We hypothesize that this phenomenon could be attributed to the sim2real gap, especially in the context of dynamic aspects. For example, we have discovered that the real hand fails to achieve precise poses when the applied action values fall below a certain threshold, resulting in more similar poses.
However, our policy still demonstrates the capability to achieve successful grasps across various human movement trajectories, indicating the potential of our approach for real-world applications. To effectively showcase the adaptability of our method, we have undertaken other simulation-based experiments, as detailed in [Q2 of Common Response](https://openreview.net/forum?id=fwvfxDbUFw&noteId=DtPbwqFxR8).
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal. My questions are well-addressed and I will keep my original rating.
---
Reply to Comment 1.1.1:
Title: Thank You!
Comment: We are so glad that our responses help address your concerns. Thanks again for all your valuable feedback! | Rebuttal 1:
Rebuttal: ## **Common Response**:
We thank all reviewers for appreciating our ideas and experiments. “A unique dexterous grasp task **(PG5Z)**". "The proposed framework is intuitive and is properly designed for the challenges of its task **(GtH3)**". "Formulate the learning from a set of successful grasps as a denoising problem, which is quite interesting and novel **(ju1F)**". "Demonstrated strong performance in both simulated and real-world experiments **(GtH3)**". "Showcasing the practical applicability of the proposed method **(KjWd)**".
However, we notice that some reviewers **(PG5Z, KjWd, ju1F)** might be confused about the **(Q1) Distinction between Teleoperation, Automatic Dexterous Grasping (Classical Grasping), and Human-Assisting Grasping**.
The main motivation and practical application **(PG5Z, KjWd, ju1F)** of assisting grasping is assisting upper limb amputees with prosthetic hands, as show in **Figure 1 of main pdf**.
- **Assisting Grasping vs. Teleoperation.**
Traditional teleoperation methods [1] **(PG5Z, KjWd)** are unsuitable for assisting grasping, because we cannot obtain information about the human fingers. In real-world experiments, we use teleoperation solely to obtain human wrist poses to mimic grasping by upper-limb amputees.
- **Assisting Grasping vs. Automatic Dexterous Grasping.** Compared to automatic dexterous grasping **(KjWd, ju1F)**, the agent cannot control the wrist of the prosthetic hand in assisting grasping, which poses a challenge for user-aware grasping. Because humans control the wrist with complex and diverse behaviours, deciding how and when to grasp is challenging for an agent that does not consider the human movement. For instance, if the human first moves the wrist close to the object and then grasps, the agent must still factor in the object-wrist relationship to decide how to grasp. On the other hand, grasping with a fixed wrist pose instead of closing the fingers in advance can result in the table obstructing finger closure, requiring humans to adjust their wrist. This affects grasp fluency and introduces a burden, a crucial consideration for humans. Moreover, humans may face dynamic objects, as shown in **Section 1 of the anonymous project page**. Ignoring human movement could result in failed grasps.
We also noticed that some reviewers were concerned about **Q2: The Necessity of Adaptability, and the Adaptability of Our Proposed Method (KjWd, GtH3, ju1F)**.
- **Necessity of Adaptability** It is crucial to grasp different parts **(KjWd)** of objects in daily life, as many objects serve multiple purposes. For example, a hammer can be used for nailing and sweeping. Adaptability is also required within a single purpose: when cleaning shoes, humans need to hold different parts of the shoes. Moreover, objects may be placed in different locations with various poses, which makes grasping a specific part challenging.
- **Adaptability of Our Proposed Method** **(KjWd, GtH3, ju1F)** Quantitatively, we have evaluated "posture", measuring the alignment between our grasp poses and the intended ones, as shown in **Figure 4 of the main pdf**. On the other hand, as shown in **Table 1 of the rebuttal pdf**, our method excels at grasping various parts of objects. Qualitatively, **Figure 1 of the rebuttal pdf** shows the human's intended grasp and our method's close-to-ground-truth pose. In another experiment, as shown in **Section 1 of the anonymous project page**, GraspGF produces diverse grasp poses as the wrist moves.
We further noticed that some reviewers **(KjWd, ju1F)** might be confused about **Q3: Distinction Between Collision in Grasp Pose Generation and Collision in Grasping Procedure**.
There are two collision types in our current problem.
- **Collision in Grasp Pose Generation** The first arises during grasp pose generation, i.e., "how to grasp". Generated grasp poses can sometimes collide with the object. To address this, we can heed the reviewer's advice **(ju1F)** to include a collision penalty, potentially improving the primitive policy's performance.
- **Collision in Grasping Procedure** This collision relates to "when to grasp", regardless of direct object collision in the generated pose. Even if a pose has no collision with the object, collisions can still occur when the agent controls the fingers to reach the desired pose, colliding with the table or the object itself.
- **Necessity of Residual Policy** To clarify, the results of "$a^p$ w/o coll" **(KjWd, ju1F)** are obtained by disabling collisions in the grasping procedure. **Figure 2 of the rebuttal pdf** highlights that while the final grasp pose can grasp the object, such poses are achieved before the agent intends to grasp. This shows that a residual policy **(KjWd)** is crucial for deciding the ideal grasp timing.
Some reviewers **(ju1F, GtH3)** also highlighted **Q4: Limitation of Perfect and Full Point Cloud**.
We have discussed this in **Section 7 of the main pdf**. Addressing this concern can involve using teacher-student learning [2] or adapting our pipeline to be trained with partial point-cloud input **(ju1F, GtH3)**. To handle the gap between real and simulated point clouds, we may add noise [3] to the point-cloud observations during training **(ju1F)**.
We sincerely hope our work contributes to the Machine Learning + Robotics research community and eventually improves social welfare. Below we reply to the reviewers' questions point-by-point. Thanks again for your valuable comments and suggestions!
**Reference**:
[1] Handa, Ankur, et al. "Dexpilot: Vision-based teleoperation of a dexterous robotic hand-arm system."
[2] Chen, Tao, Jie Xu, and Pulkit Agrawal. "A system for general in-hand object re-orientation."
[3] Dai, Qiyu, et al. "Domain randomization-enhanced depth simulation and restoration for perceiving and grasping specular and transparent objects."
Pdf: /pdf/62ce6b5f1e33d830f8bf73f335c1eb10d0b0984f.pdf | NeurIPS_2023_submissions_huggingface | 2023 | null | null | null | null | null | null | null | null |
Sensitivity in Translation Averaging | Accept (poster) | Summary: The paper proposed a method to efficiently remove view triplets from a pose graph where the minimum angle falls below a threshold. This is relevant to global SfM algorithms, where such triplets (i.e., triangles) lead to highly uncertain translation scale estimation. The method's performance has been demonstrated on the 1DSfM dataset.
Strengths: - The proposed method is a nice solution to a simple idea. Even though I feel the description has been written in an overcomplicated way, I like the idea and the method.
- The problem is relevant to global SfM methods where the translation averaging part is currently one of the main bottlenecks of achieving good accuracy.
- The proposed solution is ~100 times faster than the trivial brute-force solution.
Weaknesses: I have two major problems with the proposed method, and both are regarding the provided experiments:
- First, the proposed method is not compared to any other filters (e.g., 1DSfM), which really makes it hard to understand how much it actually improves the SOTA (if it does). I know that the proposed solver can be easily combined with 1DSfM, but I am not convinced it leads to noticeable improvements in that case. To understand this, the authors should provide results of other filters with and without the proposed one.
- Second, the authors use the 1DSfM dataset in their experiments where the addressed problem of skewed triangles is not really present. This can be seen in Table 2, which shows that actually only a few edges are removed from the pose graphs (it is surprising that removing such few edges can lead to an improvement in the final global SfM accuracy). I think it would make more sense to showcase the method on datasets where SfM is actually often failing due to the coinciding translation directions, e.g., on KITTI, which presents the trajectory of a moving vehicle. There, scale estimation is challenging. On the 1DSfM dataset, most global methods work reasonably well. Making global SfM work on such cases would have an impact.
Without comparison to baselines, the paper is a clear rejection to me. In case such a comparison is provided, I am willing to improve my rating.
Minor things/comments:
- The brute-force solution's timing should be compared to the proposed method. I know that the authors mentioned this in the text, but it would be helpful to show the actual run-time, given that the speed-up is the main contribution of the paper, while accuracy-wise, the brute-force method should be similarly good.
- L109 "being the rotation axis angle" -> "being the rotation axis and angle respectively"
- Fig.1 How can the minimum angle in a triangle be close to 180°? Maybe I misunderstood the figure.
- L294 BATA is first mentioned here (aside from a half sentence in the introduction). It should be written down that BATA is used for estimating the global positions from the directions.
- Eq. 2 is something that could be visualized by a figure, which would help a lot in understanding it.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - I don't call this a major issue, but it is definitely something that should be discussed in the manuscript. It is unclear why the authors removed skewed triangles instead of deleting a single edge. The choice of triangle removal should be justified.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. Results with 1DSfM filter: The 1DSfM filter is designed to remove outlier edges, while our method removes skewed triangles; these are two distinct aspects of the problem. We apply 1DSfM and then compare the solutions with and without our filter on real data; Table R1 of the rebuttal pdf provides the comparison. It can be seen that applying both the 1DSfM filter and our filter leads to the overall best accuracy. Moreover, the improvements are significant on the datasets where our filter helps, while the accuracy remains almost unchanged on the others. As noted earlier, since outliers and skewed triangles are distinct issues, combining filters for both types leads to improved accuracy.
2. 1DSfM dataset for evaluation: In Fig. R2 of the rebuttal pdf, we show an illustrative example of the existence of such skewed triangles in the SfM problem on Alamo, which is part of the 1DSfM dataset. Occurrences of skewed triangles are common in SfM, as depicted in Fig. 1 of the main paper, where the scatter plots show many triplets whose minimum triangle angle (represented on the x-axis) is close to zero (lines 274-276 of the main paper). We adopt a non-aggressive strategy to prune the network (lines 238-241 of the main paper), where we retain all the edges of the network ($\mathcal{E}_{ret}$) that are part of non-skewed triangles. So, even if an edge is part of both non-skewed and skewed triangles, it will be retained. In this process, only those nodes are retained that are part of at least one triplet belonging to the set of non-skewed triangles (lines 244-248 of the main paper). Thus, relatively few edges are removed compared to the total number of edges participating in skewed triangles. In cases where the directions coincide, as in KITTI, the network is not parallel rigid, which means no unique solution exists. Thus, translation averaging cannot be used in such scenarios.
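As an illustration of this retention rule, the non-aggressive pruning can be sketched as follows (our own minimal reimplementation, not the authors' code; the function names and the 5-degree skewness threshold are assumptions):

```python
import numpy as np

def min_triangle_angle(t_ij, t_jk, t_ik):
    """Minimum interior angle (radians) of a triplet, computed from the
    three pairwise unit translation directions t_ij, t_jk, t_ik."""
    ang = lambda u, v: np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))
    a_i = ang(t_ij, t_ik)    # angle at node i
    a_j = ang(-t_ij, t_jk)   # angle at node j
    a_k = ang(-t_ik, -t_jk)  # angle at node k
    return min(a_i, a_j, a_k)

def retained_edges(triplets, directions, thresh_deg=5.0):
    """Non-aggressive pruning: keep every edge that appears in at least
    one non-skewed triangle; edges appearing only in skewed triangles
    are dropped implicitly."""
    keep = set()
    for i, j, k in triplets:
        m = min_triangle_angle(directions[(i, j)],
                               directions[(j, k)],
                               directions[(i, k)])
        if np.degrees(m) >= thresh_deg:
            keep.update({(i, j), (j, k), (i, k)})
    return keep
```

Here an edge shared by a skewed and a non-skewed triplet survives, matching the non-aggressive strategy; only edges appearing exclusively in skewed triplets disappear.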
3. Time comparison with brute-force method: The timing of the brute-force method will be added in the supplementary material.
4. Minimum angle close to $180^{\circ}$: Fig. 1 of the main paper shows scatter plots on real data, which contain outliers. Some triplets containing outliers do not form a triangle, and the directions in such triplets are far from lying on a plane. Such triplets do not satisfy the constraint of a triangle, i.e., that the angles sum to $180^{\circ}$.
5. Visualizing Eqn. 2: In Fig. R1 (in the rebuttal pdf), we show an intuitive explanation of the conditioning of a triangle under different scenarios. For a well-conditioned triangle, where no angle is close to zero, a small change in a direction (green to red) leads to a small change in the absolute translation. But for an ill-conditioned triangle, where at least one angle is close to zero, a small change in a direction leads to a large change in the absolute translation.
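This behaviour can also be checked numerically. The sketch below (our own illustration; the specific triangles and perturbation size are assumptions) recovers the three edge scales of a 2D triplet from the loop-closure constraint, perturbs one direction slightly, and measures how much the recovered scales move; a skewed triangle reacts far more strongly:

```python
import numpy as np

def recover_scales(dirs):
    """Edge scales (up to global sign/scale) from the loop-closure
    constraint s1*d1 + s2*d2 + s3*d3 = 0: the null vector of the
    2x3 direction matrix."""
    _, _, Vt = np.linalg.svd(np.stack(dirs, axis=1))
    s = Vt[-1]
    return s / np.linalg.norm(s)

def scale_sensitivity(vertices, eps=1e-4):
    """Change in recovered scales per radian of perturbation applied
    to one edge direction of the triangle with the given vertices."""
    A, B, C = (np.asarray(v, float) for v in vertices)
    edges = [B - A, C - B, A - C]
    dirs = [e / np.linalg.norm(e) for e in edges]
    s0 = recover_scales(dirs)
    c, s = np.cos(eps), np.sin(eps)
    R = np.array([[c, -s], [s, c]])     # small 2D rotation by eps
    s1 = recover_scales([R @ dirs[0]] + dirs[1:])
    if np.dot(s0, s1) < 0:              # fix the null-vector sign ambiguity
        s1 = -s1
    return np.linalg.norm(s1 - s0) / eps
```

For example, the sensitivity of the near-equilateral triangle `[(0, 0), (1, 0), (0.5, 0.866)]` is orders of magnitude smaller than that of `[(0, 0), (1, 0), (2.0, 0.01)]`, whose minimum angle is close to zero.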
6. Removal of skewed triangles: An edge can participate in more than one triplet. When a triangle is identified as skewed, it is marked for removal. Specifically, identifying which edges lead to a skewed triangle can be time-consuming. We adopt an efficient approach where we classify triangles as skewed or non-skewed. Since an edge can participate in both non-skewed and skewed triangles, the intersection of the edge sets of the two classes of triangles is not empty. Now, we can either remove the edges participating in skewed triangles (aggressive) or retain the edges participating in non-skewed triangles (non-aggressive) (lines 238-241 of the main paper). We employ the non-aggressive removal of edges (lines 244-248 of the main paper). So, in effect, we only delete edges without explicitly maintaining a set of edges participating in skewed triangles, and we do not delete all the edges of the skewed triangles.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' answers. However, I am still not too convinced by the experiments. There are many other sequential datasets that the authors could use. For example, on EuRoC, global methods tend to work reasonably well. Also, there the camera poses could be directly compared to a GT and not only to a COLMAP reconstruction that might fail in some cases.
---
Reply to Comment 1.1.1:
Comment: Thank you for the comment. We could not find any resource showing results of translation averaging on sequential datasets, including EuRoC. So, we tested translation averaging using BATA [56] on the EuRoC MAV dataset and also applied our method to it, following the same experimental procedure as described in Sec. 6 of the main paper. For the "Machine Hall 01" sequence, the mean and RMS errors without our filter are 1.62 m and 1.72 m, respectively; with our filter, they reduce to 1.09 m and 1.42 m. For the "Vicon Room 1 02" sequence, the mean and RMS errors without our filter are 0.79 m and 0.88 m, respectively; with our filter, they reduce to 0.67 m and 0.77 m. Although our filter brings improvement, the errors in the camera trajectories both before and after applying it are very high compared to the incremental methods used in SLAM, where the error magnitude is on the order of 0.01 m (ORB-SLAM2 [Mur-Artal et al., 2017]). Moreover, after visual inspection, we found that the trajectories estimated by translation averaging look very different from the provided ground truth. This indicates that scale estimation on sequential datasets is intrinsically difficult for global translation averaging methods. | Summary: This paper analyzed the sensitivity problem in translation averaging. Built upon the theoretical analysis of skewed triangles in a bearing network, this paper also proposes an efficient algorithm to identify and remove edges that can make the translation averaging problem ill-conditioned. The proposed algorithm is integrated into a global SfM pipeline. The experiments are conducted on the 1DSfM dataset, with different translation averaging solvers (RevisedLUD and BATA) used to solve for global positions.
The quantitative results show the proposed algorithm effectively reduced the condition number of the angle matrix, and the translation errors are reduced consistently.
Strengths: This paper analyzed the sensitivity problem in translation averaging, which is different from the parallel rigidity problem and has not been considered before. Strict proofs are given for the theorems proposed in the paper. The proposed edge filtering algorithm is very efficient and effective.
Weaknesses: The main limitation is that the analysis and the proposed algorithm are based on the triplets in the graph, whose requirements do not always hold in the real world. The sensitivity analysis of the translation averaging problem is useful; however, it is less important than the parallel rigidity problem in my point of view. The paper gives sufficient quantitative results to support the method but lacks qualitative results.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: I have no more questions regarding the details of the proposed method. My question is on the experiment part. The authors evaluated their method on the 1DSfM dataset, which has long been used in many structure-from-motion methods. However, I think the dataset is out of date, though it was very popular a decade ago. I would like to know whether the authors evaluated their method on other, more realistic datasets, such as SLAM datasets and aerial datasets.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The limitation is that the proposed algorithm is based on the triplets in the graph. I would like to see an improved version of this paper extended to more general graphs. For now, I have to say the method does not contribute significantly to the NeurIPS community, since it has limited applicability and is less important than the parallel rigidity analysis.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. Dataset for evaluation: We use 1DSfM dataset for evaluation. We recompile the input and the ground truth using Colmap (lines 251-254 of the main paper) to get a more reliable reconstruction than the one provided using Bundler. For any dataset which is sequential in nature, like SLAM or aerial datasets, the parallel rigidity of the network is not maintained in general. For instance, estimating scales during colinear translations in SLAM is not feasible based solely on directions as input since infinitely many solutions exist, implying that the network is not parallel rigid and, thus, translation averaging cannot be applied. Also, if the parallel rigid network is extracted in such datasets, the maximal network obtained will be very small compared to the full network, which will make the problem unrealistic.
2. Analysis based on triplets of graph: In general graphs, many different structures occur in 2D and 3D (apart from triangles and structures derived from triangles), each needing an individual analysis in terms of the conditioning of the problem. Also, once the properties of each of those structures are known, analyzing the sensitivity of the graph becomes a combinatorial problem, since general graphs are combinations of those structures. Considering triplets of a network has many advantages. First, we obtain a single structure, i.e., a triangle, which can be used as a building block of the network. Next, it is relatively easy to analyze the sensitivity of a triangle since it is planar (Thms. 1 and 2) and then extend the analysis to a triplet network (Thm. 3). Lastly, we can ensure parallel rigidity based only on the network structure (Thm. 4), making it efficient for practical usage. We take a first step in dealing with the issue of sensitivity in translation averaging, and using triplet graphs for the analysis has yielded many insights into the problem.
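On the practicality of a triplet-based building block: enumerating all triplets of a view graph is cheap. A minimal sketch (our own illustration; the function name is an assumption, not from the paper):

```python
def enumerate_triplets(edges):
    """All triangles (i, j, k), i < j < k, in an undirected graph
    given as a list of edge pairs."""
    adj = {}
    for i, j in edges:
        adj.setdefault(i, set()).add(j)
        adj.setdefault(j, set()).add(i)
    triplets = set()
    for i, j in edges:
        # a triangle exists for every common neighbour of i and j
        for k in adj[i] & adj[j]:
            triplets.add(tuple(sorted((i, j, k))))
    return triplets
```

Each triangle found this way can then be tested individually for skewness, which is what makes the triplet analysis efficient in practice.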
3. Importance of sensitivity analysis: Sensitivity analysis deals with the reliability of the solution based on the input directions, while parallel rigidity addresses an important issue of the uniqueness of the solution. Sensitivity analysis is similar in spirit to the conditioning of a matrix while solving a linear system of equations (the $Ax=b$ problem), where the reliability of a solution is studied. Parallel rigidity can be equivalently described with the algebraic rank of a specific matrix [2], which is similar to the analysis of the uniqueness of a solution given a matrix obtained from a linear system of equations. Both are different aspects of translation averaging that need to be taken care of. In our experiments, we have considered only parallel rigid graphs (lines 256-257 of the main paper). In Tables 1 and 3 of the main paper, it can be seen that removing the skewed triangles leads to improvements in the translation estimates. Moreover, the nodes which were removed due to the removal of skewed triangles are not estimated properly, as seen in the "Removed Node Errors" column of the tables. Such filtering also leads to improvement of the reconstructions and faster convergence of bundle adjustment (Table 4 of the main paper). So, we believe that sensitivity analysis is an important aspect of translation averaging, which is as important as parallel rigidity and outlier detection.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: Thanks to the authors for the rebuttal. Most of my concerns are addressed. I decide to improve my score. The main paper or supplement should include visualizations and further discussion in the rebuttal.
---
Reply to Comment 1.1.1:
Comment: Thank you for the comment. We will incorporate the visualizations and further discussion in the main paper and supplementary material. | Summary: The translation averaging problem is considered, i.e recover absolute translations from pairwise relative translation directions. The paper focuses on analyzing the change in solution with small changes in the input relative directions. The smallest problem (3 nodes) is initially considered which allows to understand that skewed triangles (triplets) are problematic. In order to move to a general problem, the authors propose to consider the case of a network containing only triplets. It allows them to defined the conditioning of the translation averaging problem as the condition number of the "angle matrix". They prove a problem is well conditioned if the minimum angle in all triangles are sufficiently large. Thus, they propose a method to remove such triangles. They experimentally demonstrate that including the proposed method within a classical sfm pipeline allows to obtain better absolute translations, as well as more 3D points triangulated and faster BA.
Strengths: 1. Analyzing the sensitivity is a non-trivial problem.
2. Several theorems are proven that allow identifying skewed triangles as problematic.
3. The theorems are important to motivate the proposed skewed-triangle-removal algorithm.
4. The experiments consider both the case of outlier free data and real data.
5. The paper is well written and easy to read.
Weaknesses: The proposed algorithm consists of removing information to obtain a better-conditioned problem. But the experiments state that the skewness of a triangle is not related to the presence of outliers. So, if I understood correctly, the removed triangles are not necessarily outliers and thus may contain important information; the proposed algorithm may therefore harm the final reconstruction. Could you please comment on this?
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Please see "weaknesses"
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: .
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. Important information in removed triangles: The triangles filtered by our method are not necessarily outliers. From Tables 1 and 3 of the main paper, it can be seen that the errors of the removed nodes are high (Removed Node Errors column) compared to the errors over all cameras, even when there is no outlier (Table 1), which indicates that the removed nodes are not estimated properly. Since the removed nodes are not reliable, removing them improves the reconstruction, which is validated by more triangulated points and fewer iterations of bundle adjustment in Table 4 of the main paper. | Summary: The authors propose a sensitivity theory for the Translation Averaging problem (i.e., the input is a large number of relative direction observations between vertex pairs and the output is the absolute vertex coordinates with consistent scales), which can be used to efficiently identify the inputs that would make the problem ill-conditioned (i.e., too-small relative direction angles with too-large uncertainty to form valid constraints) and remove them to improve the overall accuracy and convergence speed of the algorithm.
Strengths: 1. The paper has solid theory, elegant derivations, and creative use of mathematical tools to analyze uncertainty.
2. The experiments fully confirm the authors' claims, effectively improving the accuracy of the algorithm and accelerating its convergence.
Weaknesses: 1. The motivation of this article is not sufficient. I understand that by comparing the direction angles we can determine whether the problem is ill-conditioned, but I do not know whether this situation often occurs in practical applications (from the experimental point of view, the improvement in accuracy is very limited).
2. The article is rather obscure and difficult to understand, especially Sections 4 and 5.
3. Lack of qualitative experiments to demonstrate the importance of such problems and the superiority of the method.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. I completely missed the meaning of what is shown in Figure 1; could the authors reintroduce it?
2. Can this uncertainty be explicitly modeled as the uncertainty of a Gaussian distribution and incorporated into the optimization framework for solving absolute vertex positions?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: 1. I think the biggest problem of the paper is that this is not a very important problem. First, other SfM works must have similar methods to avoid the uncertainty caused by too-small parallax. Second, whether this extreme case is common in real scenes is doubtful. Third, if a vertex is observed by more than one other vertex, then observations in other directions can be used to constrain the absolute vertex solution. So, the authors should explain again why this is an important question, which I think may be demonstrated with qualitative experiments.
2. Some parts of the paper are too obscure; it is suggested to add more intuitive descriptions and visual diagrams to aid understanding.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. Fig. 1 of the main paper: We analyze the real data to understand how frequently skewed triangles occur and whether they have any relation to the presence of outliers. For this, we provide scatter plots of the minimum angle in a triangle on the x-axis (which reveals the skewness of the triangle: a smaller minimum angle means a more skewed triangle) against the maximum error of the directions in the triangle (which reveals the presence of an outlier: a larger maximum error indicates an outlier). It can be seen that many triplets have a minimum angle close to zero, which reveals that skewed triangles are common (lines 274-276 of the main paper). The scatter plots also imply that there is no relation between the skewness of a triangle and the presence of an outlier in it.
2. Uncertainty modelling: Given the nature of the input (relative directions) and output (absolute translations), modelling the uncertainty on absolute translations as Gaussian may not reflect the true uncertainty. Even then, if it is modelled as a Gaussian and incorporated into the optimization routine, small inaccuracies in the uncertainty estimates can lead to a drastic change in the output when the triangles are skewed, as an implication of Thm. 1, which is undesirable.
3. Small parallax and occurrence of skewed triangles: Although small parallax is handled in SfM pipelines which could lead to skewed triangles, the existence of skewed triangles is not limited to small parallax. As shown in Fig. R2 (in rebuttal pdf), even cameras with large baselines can lead to skewed triangles. Fig. 1 (of the main paper) also discloses the same where there are many triplets having the minimum angle of the triangle close to zero, denoting the existence of skewed triangles (lines 274-276 of the main paper).
4. Vertex constraint: If a vertex is constrained by at least two other vertices, which is always the case for parallel rigid networks, we can obtain a perfect solution when there is no noise in the directions and no numerical error. But, in practice, the directions are noisy, and rounding-off or truncation errors during computation can lead to errors in the solution for the vertex. This is similar in spirit to the conditioning of a matrix while solving a linear system of equations (the $Ax=b$ problem), where numerical issues arise when the matrix is not well-conditioned. Thus, constraining the vertex is not adequate to obtain a reliable solution.
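The $Ax=b$ analogy can be made concrete with a toy example (illustrative only; the matrices are our own choice, not from the paper): for a nearly singular system, a tiny perturbation of $b$ shifts the solution by orders of magnitude more than for a well-conditioned one, even though both systems have a unique solution.

```python
import numpy as np

def solution_shift(A, eps=1e-6):
    """How far the solution of A x = b moves per unit perturbation of b."""
    b = np.array([1.0, 2.0])
    x0 = np.linalg.solve(A, b)
    x1 = np.linalg.solve(A, b + np.array([eps, 0.0]))
    return np.linalg.norm(x1 - x0) / eps

A_good = np.array([[2.0, 1.0], [1.0, 3.0]])     # condition number ~ 2.6
A_bad = np.array([[1.0, 1.0], [1.0, 1.0001]])   # nearly singular, ~ 4e4
```

This mirrors the rebuttal's point: uniqueness of the solution (full rank, here) does not by itself guarantee reliability under noisy inputs.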
5. Intuitive descriptions: Fig. R1 (in rebuttal pdf) shows the conditioning of the triangle under different scenarios, which provides examples of well-conditioned and ill-conditioned triangles. Fig. R2 shows an illustrative example of the existence of such skewed triangles by considering the Alamo dataset, where two different cases of ill-conditioned triangles (one small angle and two small angles in a triangle) are shown.
6. Qualitative results: In Figs. R3 and R4 of the rebuttal pdf, we provide some qualitative results. In Fig. R3, the arch in the Notre Dame reconstruction is misplaced due to improper translation estimates, as seen in the lateral view. Using our filter removes such a misplaced arch. In Fig. R4, the wall in the Piazza del Popolo reconstruction is misplaced, but such an effect is not seen after applying our filter. This reveals that removing skewed triangles improves the reconstruction quality.
---
Rebuttal Comment 1.1:
Comment: "I think the biggest problem of the paper is that this problem is not a very important problem, first of all, there must be similar methods inside other SFM works to avoid the uncertainty caused by too small parallax. Secondly, whether this extreme case is more common in real scenes is doubtful."
The reviewer is mistaken. The fact that translation averaging is extremely challenging is the reason why global Structure-from-Motion methods are not really used in practice. They are not as good as incremental approaches due to the complexity of translation averaging (i.e., the missing scale information makes the problem very hard). Solving translation averaging accurately would unlock practical global SfM pipelines (instead of incremental ones), reducing the 3D reconstruction processing time by orders of magnitude. The paper is a new (albeit small) step towards this. | Rebuttal 1:
Rebuttal: We thank the reviewers for their comments. In this section, we provide descriptions of the table and figures presented in the rebuttal pdf (which have the prefix "R" in their enumeration) and address individual concerns in the individual rebuttal sections.
1. Results with 1DSfM filter (Table R1): Our method removes skewed triangles, while 1DSfM is designed to remove outlier edges; these are two distinct aspects of the problem. We apply the 1DSfM filter on real data and then compare without and with our filter in Table R1 using BATA [56]. It can be seen that applying the 1DSfM filter followed by our filter yields better translation estimates. Moreover, our filter + 1DSfM filter removes nodes that are more poorly estimated than those removed by the 1DSfM filter alone, as seen from the removed-node errors in the unfiltered network in Table R1. Since outliers and skewed triangles are different issues, combining filters for both types improves accuracy.
2. Practical applicability (Fig. 1 in main paper and Figs. R1, R2): In Fig. R1, we show the conditioning of the triangle under different scenarios, with green and red depicting the unperturbed and perturbed directions, respectively. For a well-conditioned triangle, a small change in direction leads to a small change in the absolute translation. But for an ill-conditioned triangle, a small change in direction leads to a large change in the absolute translation. Fig. R1(a) shows a well-conditioned triangle, Fig. R1(b) shows a triangle with one small angle, due to which it is ill-conditioned, and Fig. R1(c) shows a triangle with two small angles, making it ill-conditioned (these conditions are also inferred by Thm. 1). Fig. R2 shows an illustrative example of the existence of such skewed triangles in the SfM problem. For the Alamo dataset, most images capture the front part of the museum, and thus the cameras are densely connected in the network. The BLUE triangle depicts a triplet that is a Type-I ill-conditioned triangle, and the GREEN triangle shows a Type-II ill-conditioned triangle. In Fig. 1 of the main paper, it can be seen from the scatter plots that there are many triplets whose minimum angle (represented on the x-axis) is close to zero (lines 274-276 of the main paper). This reveals that skewed triangles are very common in unordered image collections and thus affect the ability to carry out accurate reconstructions.
3. Qualitative results (Figs. R3, R4): We provide some qualitative results in the rebuttal pdf. In Fig. R3, the arch in the Notre Dame reconstruction is misplaced due to improper translation estimates, as seen in the lateral view. Using our filter removes such a misplaced arch. In Fig. R4, the wall in the Piazza del Popolo reconstruction is misplaced, but such an effect is not seen after applying our filter. This reveals that removing skewed triangles helps produce more accurate reconstructions, which is also reflected in the larger number of points triangulated for most datasets, as shown in Table 4 of the main paper.
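The conditioning effect described in point 2 can be checked numerically. The sketch below (our own toy illustration with hypothetical 2D camera positions and bearing angles, not data from the paper) intersects two bearing rays and measures how far the intersection moves when one bearing is perturbed slightly, for a well-conditioned triangle versus a skewed one:

```python
import math

def intersect(c0, c1, th0, th1):
    # Intersect 2D rays c0 + t0*d0 and c1 + t1*d1 with bearings th0, th1 (radians).
    d0 = (math.cos(th0), math.sin(th0))
    d1 = (math.cos(th1), math.sin(th1))
    rx, ry = c1[0] - c0[0], c1[1] - c0[1]
    denom = d0[0] * d1[1] - d0[1] * d1[0]   # nonzero unless rays are parallel
    t0 = (rx * d1[1] - ry * d1[0]) / denom
    return (c0[0] + t0 * d0[0], c0[1] + t0 * d0[1])

def sensitivity(th0, th1, eps=1e-4):
    # Displacement of the intersection when bearing th0 is perturbed by eps.
    c0, c1 = (0.0, 0.0), (1.0, 0.0)          # unit baseline between the cameras
    p = intersect(c0, c1, th0, th1)
    q = intersect(c0, c1, th0 + eps, th1)
    return math.hypot(q[0] - p[0], q[1] - p[1])

# Well-conditioned triangle: bearings meet at ~90 degrees.
well = sensitivity(math.radians(45), math.radians(135))
# Skewed triangle: tiny angles at both cameras, nearly parallel bearings.
ill = sensitivity(math.radians(2), math.radians(177))
```

With these hypothetical values, the skewed triangle's intersection moves roughly an order of magnitude more than the well-conditioned one for the same bearing perturbation, which is the instability the proposed filter targets.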
Pdf: /pdf/32650f07e195d5024f1982c78a6981836c9ce575.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Demo2Code: From Summarizing Demonstrations to Synthesizing Code via Extended Chain-of-Thought | Accept (poster) | Summary: This work presents a recursive method to summarize demonstrations into programs through LLM. The idea is interesting in that it uses spec as the bottleneck to connect complex demonstrations and complex robot task code, encoded and decoded through chain of thoughts. The method is evaluated on three different benchmarks involving table-top manipulation, novel kitchen text tasks, and EpicKitchen. The method outperforms naive language-to-code baselines and can generalize to longer-horizon tasks as well as learning user intents.
Strengths: 1. The problem is challenging in that demonstrations and codes are both complex: demonstrations have lots of details and multimodality, yet codes are abstract and need to follow strict requirements.
2. The method is sound: it uses divide and conquer to tackle some limitations of the current LLM.
3. The new benchmark can be interesting to researchers that want to attempt the high-level planning problem in kitchen tasks.
4. The figures and the pseudocode are helpful.
5. The method generalizes to longer-horizon tasks as well as to learning user intents.
Weaknesses: 1. I have several concerns about the evaluation metrics.
2. More details are needed on the tabletop benchmark, the EpicKitchen experiment, and the newly proposed benchmark. It looks like EpicKitchen is closer to diverse raw data such as YouTube, whereas the new kitchen simulator and the tabletop tasks have more predicates as well as low-level relationships.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The formulation of the MDP seems a bit unnecessary to me.
2. The spec is called “latent” multiple times in the paper, but shouldn’t it be clearly defined for each task?
3. To confirm on the evaluation metric: Is the unit-test pass rate the same as the task success rate? I.e., does the generated code satisfy a set of instructions (task) defined by the demonstrations? Or is it narrower?
4. What is the quality of the spec2code, and why is matching it a necessary metric? It seems to me that there are many ways to write code for each task.
5. Is hallucination a problem? Because the method introduces such a pipeline instead of being end-to-end, is it possible that the LLM introduces unnecessary steps in the instruction summary stage and then introduces more functions to complete those substeps in the expanding stage?
6. What is the reference code in the EpicKitchen dataset mentioned in line 190? And what is the user scoring process?
7. Given that there could be multiple ways to do the breakdown and LLMs are not deterministic, is stochasticity a concern? It seems that variance is not provided in the table for each experiment.
8. Have the authors tried the new GPT-3.5 model with a longer context or the GPT-4 model?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer for the questions which help us improve our paper's clarity. We are excited that the reviewer acknowledges how our approach solves a challenging problem and Robotouille as a promising benchmark. Please see below for our answers to the questions:
### Questions
#### **Q2: Why is "latent task specification" not clearly defined for each task?**
We wish to clarify the sentence (line 30-31): “while demonstrations are long and code is complex, they both share a latent task specification that the user had in mind.”
We use "latent task specification" because:
- “Latent” evokes the analogy with an encoder-decoder architecture that encodes demonstrations with the summarizer and decodes into code with the code expansion module.
- "Task specification” refers to the detailed language description of how the task should be completed.
This specification is not defined for each task because we assume that the user doesn't need to provide detailed instructions on how to complete a task. Instead, Demo2Code is responsible for "encoding" the demonstrations and generating this latent task specification.
#### **Q3: Is the unit-test pass rates the same as the execution success rate?**
No, they are different.
- The execution success rate shows whether a code can be executed in the simulator without any error.
- The unit-test pass rate is whether a code has completed the task in the same way that the user wants. The unit test checks whether the code has completed all the subgoals and satisfied all the constraints successfully.
For example:
- **When execution success rate = 0:** The generated code has syntax error, or if it violates the physical constraint in the simulator (e.g. trying to pick up item A from table B even though table B is empty).
- **When unit-test pass rate = 0:** Consider a task to make a burger with a patty and lettuce. Not known to our pipeline, the user wants to prepare all the ingredients before assembling the burger, and they also want the lettuce to be on top of the patty. Even if a code makes a burger without having any error in the simulator (thereby having an execution success rate of 1), if it cooks the patty then immediately adds it to the burger instead of cutting the lettuce first, or if it puts the patty on top of the lettuce, the code will fail the unit test.
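As a toy illustration of this distinction (action names and trace format are hypothetical, not the paper's actual test harness), a trace-based unit test for the burger example might look like:

```python
# Hypothetical unit test for "prepare all ingredients before assembling,
# with lettuce stacked on top of the patty".
def unit_test_burger(trace):
    """trace: ordered list of (action, item) tuples logged during execution."""
    def index_of(step):
        return trace.index(step) if step in trace else -1

    cut = index_of(("cut", "lettuce"))
    cook = index_of(("cook", "patty"))
    stack = index_of(("stack", "lettuce_on_patty"))
    if min(cut, cook, stack) < 0:       # a subgoal was never reached
        return False
    # The user's hidden constraint: all preparation happens before assembly.
    return max(cut, cook) < stack

# Both traces could execute without simulator errors (execution success = 1),
# but only the first satisfies the user's preference (unit-test pass = 1).
good = [("cut", "lettuce"), ("cook", "patty"), ("stack", "lettuce_on_patty")]
bad = [("cook", "patty"), ("stack", "lettuce_on_patty"), ("cut", "lettuce")]
```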
#### **Q4: Why is matching Spec2Code a necessary metric?**
Spec2Code assumes access to the latent task specification, so it knows exactly how to complete the task. We agree with the reviewer that matching Spec2Code isn't a necessary metric, as there are many possible ways to solve the task while satisfying the user's hidden requirements.
However, we empirically found this metric to be a useful proxy metric of the quality of a code without having to actually run the code.
- A high match rate indicates that the code is likely to succeed in execution and pass the unit test.
- However, a low match rate doesn't imply the code is wrong. Hence, we rely on the execution success rate and unit-test pass rate to evaluate the correctness of the code.
#### **Q5: Is hallucination a problem?**
For our experiments, we didn't encounter explicit issues with hallucination, although LLMs like GPT-3.5 do hallucinate at times [1]. We are able to avoid hallucination because, when we show examples of how to summarize demonstrations, we make sure that the LLM cites the parts of the demonstrations that make up each summarized line.
However, we do run into issues with the LLMs making mistakes during summarization, which consequently causes the task specification to be incorrect.
- e.g. the LLMs can accidentally omit a high-level subtask like stacking a patty on top of the lettuce, thereby causing the specification to also miss this subtask.
This limitation shows the value of future work to add a verification and improvement step so that our approach can catch errors it makes and improve its code [2-5].
#### **Q6: What is the reference code and the pass rate for the EPIC-KITCHENS dataset?**
The reference code for the EPIC-KITCHENS dataset is written by a human annotator after watching the demonstrations, and reviewed by another human to avoid inconsistency. Then, the LLM's generated code is compared against this reference code to compute the BLEU score (match score).
The pass rate is similar to the unit-test pass rate in the tabletop environment and Robotouille. However, because we don't have access to a simulator to run the generated code, we rely on human annotators to check if the demonstrations fit as a trace of the generated code.
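For reference, a hedged sketch of the clipped n-gram precision at the core of a BLEU-style match score (BLEU proper additionally applies a brevity penalty and a geometric mean over several n-gram orders; the token streams below are illustrative, not the paper's data):

```python
from collections import Counter

def ngram_precision(candidate, reference, n=2):
    """Clipped n-gram precision of candidate tokens against a reference --
    the core quantity behind a BLEU-style match score."""
    cand = [tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1)]
    ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
    if not cand:
        return 0.0
    hits, matched = Counter(), 0
    for g in cand:
        if hits[g] < ref[g]:   # clip repeated n-grams at the reference count
            hits[g] += 1
            matched += 1
    return matched / len(cand)

# Illustrative token streams for a generated and a reference code line.
gen = "wash ( cup ) ; rinse ( cup )".split()
ref_code = "wash ( cup ) ; dry ( cup )".split()
```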
#### **Q7: Is stochasticity a concern?**
It is not a major concern because we use temperature = 0 for GPT-3.5, which keeps our generated output and the final code mostly deterministic.
#### **Q8: Have the authors tried the new GPT-3.5 model with longer contexts or GPT-4 model?**
While we didn't try the new GPT-3.5 model with longer contexts or GPT-4, we hypothesize that the increased context length will help solve more complicated tasks with longer and larger sets of demonstrations.
The multimodal reasoning capabilities of models like GPT-4 (which came out in parallel with our experiments) can also be more robust to noisy demonstrations and better suited to real-world tasks like those in the EPIC-KITCHENS dataset.
[1] Mark Chen, et al. Evaluating large language models trained on code. arXiv:2107.03374, 2021.
[2] Marta Skreta, et al. Errors are useful prompts: Instruction guided task programming with verifier-assisted iterative prompting, 2023.
[3] Shreyas Sundara Raman, et al. Planning with large language models via corrective re-prompting, 2022.
[4] Debjit Paul, et al. Refiner: Reasoning feedback on intermediate representations, 2023.
[5] Noah Shinn, et al. Reflexion: Language agents with verbal reinforcement learning, 2023.
---
Rebuttal Comment 1.1:
Title: Thank you for your feedback
Comment: Hello! Thank you again for your feedback and questions! As the discussion period ends soon, please let us know if we can provide any additional clarifications or answers that would help you in your evaluation. | Summary: This paper presents demo2code, a framework that takes as input user's language instructions as well as demonstrations, and outputs synthesized code for completing the tasks. It first iteratively summarizes given demonstrations to a compact task specification, then reasons by incorporating user preferences etc, and lastly output expanded execution code. The method is evaluated on a range of tasks, including table top manipulation, a simple cooking simulator, and real-world epic kitchen dataset.
Strengths: - The framework is novel in that it proposes to summarize demonstrations and language instructions using LLM, which is then used for action generation via code synthesis.
- the recursive and hierarchical way of summarizing demonstrations and generating code is reasonable
- the idea of using an LLM to reason about user preferences makes sense
Weaknesses: - Upon reading the introduction I was excited to see how the approach handles both instructions and demonstrations: the latter usually come in a visual space. But then I realized the authors make a big assumption that they can query the simulator to get state-based demonstrations. This assumption presents a few issues: 1) such privileged information hinders application in realistic settings; in fact, the authors had to manually and densely label the EpicKitchen dataset, which raises the question of how this can be used in real-world settings. 2) Even in simulation, it is not straightforward to obtain this state information; for example, unlike `on-top`, relations such as `in(obj, microwave)` are hard to obtain easily. Also, binary spatial relations lose dense geometric information. 3) If access to step-by-step low-level state is assumed, and the LLM can summarize and generate step-by-step specifications, why not directly use task and motion planning (TAMP) to solve the task?
- how does the oracle spec2code work? Does it use TAMP? If yes, what's the advantage of demo2code over it?
- the experiment which uses EpicKitchen but needs additional manual annotation undercuts the point of evaluating on a real-world dataset
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - the authors say the language instruction and demonstration share a common latent space after summarizing, but I don't see any details on this. Did I miss something?
- also see weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: - assumption on access to privileged low-level state and relations
- additional annotation on real-world data
- other limitations are discussed in the last section
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate that the reviewer acknowledges our novelty in recursively summarizing demonstrations and hierarchically generating code. We would like to respond to the reviewer's helpful feedback and questions.
### Questions
#### **Q1: Do you have any detail on the latent task specification?**
In the paper's introduction section (line 30-31), we stated that "while demonstrations are long and code is complex, they both share a latent task specification that the user had in mind." Specifically, this task specification is a detailed language description of how the task should be completed. It is latent because we assume that the user does not directly provide it as the language instruction. Please refer to our global response for more details.
### Weakness
#### **Clarification of assuming access to state-based demonstrations**
We clarify the focus of our work in the global response. We envision that our pipeline will exist in conjunction with other perception modules that parse the environment and demonstrations into symbolic states. We make this assumption so that we can focus on our goal of generating robot task code from demonstrations. As a part of future work, we are integrating Demo2Code into an end-to-end robotics system.
#### **Clarification on code v.s. using TAMP**
We refer the reviewer to the global response on our justification of generating code. We chose code as a flexible closed-loop policy representation that can check conditionals, contain loops, and use existing utility functions. In contrast, TAMP produces an open loop plan that needs replanning if the environment changes.
However, note that once Demo2Code's pipeline is used to summarize the demonstrations into a task specification, the downstream policy can be modified to other types of policy representation, e.g. PDDL planner, TAMP.
- For example, we show the possibility of integrating Demo2Code with a symbolic planner LLM+P [1], a pipeline that uses LLMs to generate PDDL before calling a planner to solve the problem. Please refer to Figure 1 of the attached pdf for a qualitative example.
#### **Clarification of the oracle Spec2Code**
The oracle Spec2Code and our approach Demo2Code are different in the following way:
- Spec2Code assumes having access to the latent task specification, which is the user's hidden information on how they want the tasks to be completed.
- Demo2Code doesn't know the task specification. It needs to generate the specification by summarizing the input demonstrations.
However, the second stage of Spec2Code and Demo2Code are the same: given a task specification, generating the robot task code and recursively defining any helper function.
Spec2Code sets an upper bound for how good Demo2Code can be. Demo2Code attempts to generate a good enough task specification from demonstrations and language to achieve the same performance as Spec2Code.
#### **What is the advantage of using Demo2Code over Spec2Code?**
Using Spec2Code directly is cumbersome because it requires the human to give a very detailed description of how to do a task. In contrast, Demo2Code just needs the user to demonstrate the task. Thus, Demo2Code reduces the amount of effort that users need to spend in order to teach and interact with a robot.
#### **Clarification of running experiments with EPIC-KITCHENS data**
While we acknowledge that relying on manual annotations for EPIC-KITCHENS is limiting, we argue that the experiments we provide are still quite valuable because:
- EPIC-KITCHENS is a real-world dataset, analogous to how a typical user would provide demonstrations once Demo2Code is integrated into an end-to-end robotics system.
- The data covers a wide range of unique kitchens with different kitchenware and a diverse set of users, each of whom has a preferred way of washing dishes.
- The manually annotated states and actions have a wide range of objects, predicates, and actions.
As future work, we are integrating Demo2Code with a vision-language model that can automatically extract states and actions from video data. In the meantime, our current experiments have verified that Demo2Code can extract different real-world users' unique preferences from the output of such perception models.
We also ran ablation studies where we added noisy predicates (e.g. positions of irrelevant objects in the background) that a perception module might automatically identify in real-world demonstrations. Table 1 in Section G of the appendix shows that noisy demonstrations can worsen LLM's performance. Thus, the EPIC-KITCHENS dataset emphasizes the importance of future works on using feedback to iteratively improve Demo2Code's output [2-5].
[1] Bo Liu, et al. LLM+P: Empowering large language models with optimal planning proficiency, 2023.
[2] Marta Skreta, et al. Errors are useful prompts: Instruction guided task programming with verifier-assisted iterative prompting, 2023.
[3] Shreyas Sundara Raman, et al. Planning with large language models via corrective re-prompting, 2022.
[4] Debjit Paul, et al. Refiner: Reasoning feedback on intermediate representations, 2023.
[5] Noah Shinn, et al. Reflexion: Language agents with verbal reinforcement learning, 2023.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' detailed response. I have increased my scores accordingly.
---
Reply to Comment 1.1.1:
Title: Thank you for your feedback
Comment: Thank you for your detailed review, questions, and suggestions! | Summary: This paper proposes Demo2Code, a new method for generating code given a natural language description and demonstrations of the task. Demo2Code recursively summarizes demonstrations using a language model (LM) to create a task specification. The task specification is concatenated to the description and then recursively synthesized into code using a LM. Demo2Code is shown to outperform previous SoTA on an object manipulation environment, an author-designed cooking simulator, and the EPIC Kitchens dataset. Finally, qualitative results are shown demonstrating OOD generalization, grounding, and understanding of user preferences.
Strengths: *Originality:* a new long-range sequential decision-making benchmark, Robotouille, was developed that focuses more on high-level actions rather than manipulation and navigation. This may be useful for future work studying agents that learn from task specification.
*Quality:* the experiments are comprehensive and cover a range of tasks and domains, suggesting that the method is robust to distributions of tasks. Furthermore, the results show a marked improvement over the previous SoTA on all the tasks, which likely demonstrates the efficacy of leveraging demonstrations.
*Clarity:* the paper is well-written and the figures are self-explanatory.
*Significance:* generating code from demonstrations is an important step towards developing agents that can efficiently interact with humans. Moreover, this work demonstrates a working end-to-end setup from natural language demonstrations to code, likely encouraging future work in this area.
Weaknesses: There are two implicit assumptions of the method that are not evaluated and may make the method difficult to use in practice. I believe that these assumptions may be hard to overcome, so I am leaning towards giving a 6.5 but am rounding up because of the high-quality execution.
1. *The method assumes that demonstrations are complete descriptions of each state in the trajectory.* I would imagine that in many real-world environments (which is the setting that such a method would likely be deployed in), obtaining a complete description of each state in the trajectory is noisy. Some actions may be occluded or unable to be clearly delineated. Moreover, it may not even be clear a priori how to canonically parse actions from video or a natural language description of the environment. Would it be possible to develop an ablation where some of the actions are noisily parsed or even entirely omitted from the demonstration? I suspect that GPT-3.5 may not be able to handle the perturbation, but GPT-4 may be able to.
2. *The method assumes that the task description is well-specified.* Humans can often provide demonstrations and descriptions of the task that are under-specified or misspecified. For example, when booking airline tickets, a human may forget to describe their preference for red-eye flights and there may not be enough demonstrations to determine their underlying preferences [1]. In its current form, the method appears unequipped to handle such cases and it is unclear what solution it would generate. As mentioned in the work, one possible remedy is to provide feedback to the LM, but it is unclear how successful such an approach would be.
[1] Lin, J., Fried, D., Klein, D., & Dragan, A. (2022). Inferring Rewards from Language in Context. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 8546–8560). Association for Computational Linguistics.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Questions:
* Is there intuition for why the pass performance of DemoNoLang2Code is greater than the pass performance of Demo2Code on the "Make a burger stacking lettuce atop patty immediately" and the "Make two burgers stacking patty atop lettuce after preparation" tasks in Table 2?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: In general the limitations are well-addressed. The following two limitations were also addressed as weaknesses above:
* Demonstrations are complete descriptions of low-level actions. In practice it seems difficult to obtain complete descriptions of all actions and state of an environment, as actions are often noisy and difficult to clearly delineate. E.g., given some video of a human washing the dishes, it would be unclear how to appropriately parse the action space (in the paper it was done by hand).
* The natural language description of the task may be misspecified by the human. This seems like a fundamental limitation of the current method, and might make the generated code incorrect or misspecified.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer's enthusiasm for our cooking game Robotouille and how our approach shows improvement over a wide range of tasks and domains. We also appreciate the reviewer's feedback on our assumptions. We would like to address the questions and concerns raised:
### Questions
#### **Q1: Why is the unit pass rate of DemoNoLang2Code sometimes greater than Demo2Code?**
The two instances in Table 2 of the paper where DemoNoLang2Code performed better than Demo2Code are due to errors made by the summarization module in Demo2Code. For example, when Demo2Code summarizes the demonstrations for the "make two burgers stacking patty atop lettuce after preparation" task, the LLM accidentally makes a mistake and misses the high-level subtask of stacking the patty on top of the lettuce. Consequently, the task specification also misses this subtask. We are working on better verification techniques to catch such mistakes and improve code generation.
### Weakness
#### **Clarification of assuming access to state-based demonstrations**
We refer the reviewer to the global response where we clarified the scope. We envision that our pipeline will exist in conjunction with other perception modules that parse the environment and demonstrations into symbolic states. As a part of future work, we are integrating Demo2Code into an end-to-end robotics system.
#### **Ablations where some states are noisily parsed or entirely omitted?**
Please refer to the appendix (section G Table 1), where we have run an ablation study with the EPIC-KITCHENS dataset.
- We try adding noisy, distracting predicates (e.g. showing the position of additional objects in the scene) to at least two separate states in the demonstrations to confuse the LLM.
- We find that Demo2Code's performance suffers from degradation.
- Specifically, it originally can correctly extract 5 out of 7 users' preferences, but with noisy states, it can only correctly solve for 3 users.
We also present new noisy state ablation experiments for Robotouille in the attached pdf (Table 1).
- Specifically, for a demonstration, we study the effect of randomly removing 10% of the predicates and the effect of randomly removing 10% of states completely.
- We find that missing 10% of the predicates in demonstrations only slightly worsens the performance (from 0.465 to 0.42).
- Meanwhile, because removing 10% of the states effectively removes more than 10% of the predicates, it affects the performance more severely (from 0.465 to 0.327).
Table 1 in the attached pdf contains detailed results for individual Robotouille tasks.
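A minimal sketch of how such a predicate-dropping ablation might be implemented (function name, data layout, and fractions are illustrative, not the paper's exact harness):

```python
import random

def drop_predicates(demo, frac=0.10, seed=0):
    """Noise-ablation sketch: independently drop each predicate with
    probability `frac` from every state of a demonstration."""
    rng = random.Random(seed)  # fixed seed keeps the ablation reproducible
    return [[p for p in state if rng.random() >= frac] for state in demo]

# A toy demonstration: 10 states with 10 predicates each.
demo = [[f"pred_{s}_{i}" for i in range(10)] for s in range(10)]
noisy = drop_predicates(demo, frac=0.10)
```

Removing whole states, the harsher ablation reported above, would be analogous: drop entire inner lists instead of individual predicates.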
#### **Clarification of using language description and demonstrations as input**
We agree with the reviewer that language descriptions of a task can be under-specified or misspecified. We offer the following two arguments.
1. Demonstrations may capture complementary information that language description omits. Even if demonstrations and language are both noisy, as long as one modality captures the user’s preference, Demo2Code can recover it in the specification. Using the airline ticket booking examples, if a user shows a demonstration where a red-eye flight is picked over other ones that are cheaper, our pipeline can extract that preference from the demonstrations even if the user never explicitly states that they prefer red-eye flights.
2. In the event that both demonstrations and language miss such information, we would need feedback from the user to add what's missing. Recent works [1-4] have shown such feedback schemes are able to correct LLMs' output successfully. We should be able to extend Demo2Code to add such feedback, and we will explore this in future work.
[1] Marta Skreta, et al. Errors are useful prompts: Instruction guided task programming with verifier-assisted iterative prompting, 2023.
[2] Shreyas Sundara Raman, et al. Planning with large language models via corrective re-prompting, 2022.
[3] Debjit Paul, et al. Refiner: Reasoning feedback on intermediate representations, 2023.
[4] Noah Shinn, et al. Reflexion: Language agents with verbal reinforcement learning, 2023.
---
Rebuttal Comment 1.1:
Title: Thank you for the clarification
Comment: . | Summary: The authors propose an LLM-based completion framework to translate natural language instructions, in addition to transcribed state sequences of demonstrations (as PDDL (or other strips-like) predicates), into code for executing the task with a robot.
The method is based on recursively summarizing the demonstrations into a 'specification', and then recursively expanding the 'specification' into python(?) code. The recursion stops in the first phase when no further summarization is possible, and in the code expansion loop, when no function is undefined.
The authors demonstrate this in three different domains with up to 10 distinct high-level actions occurring in each demonstration.
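The two recursion phases just described could be sketched as follows (function names are hypothetical stand-ins for the LLM prompting steps, not the paper's API):

```python
def demo_to_spec(demos, llm_summarize, is_summarized):
    # Phase 1: recursively summarize until no further summarization is possible.
    spec = demos
    while not is_summarized(spec):
        spec = llm_summarize(spec)
    return spec

def spec_to_code(spec, llm_expand, undefined_functions):
    # Phase 2: expand the spec, then keep defining any still-undefined function.
    code = llm_expand(spec)
    while undefined_functions(code):
        for f in undefined_functions(code):
            code += llm_expand(f)
    return code
```

Termination of the expansion loop is assumed: each call to `llm_expand(f)` must actually define `f`, otherwise the loop makes no progress.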
Strengths: Multimodal learning from demonstrations plus language instructions is an important problem that will enable consuming a wider modality of data sources to generate robot/agent programs and behavior. Using the summarization/translation capabilities of LLMs will likely be a key component of such a system.
Weaknesses: **Methodology**: The paper does not define concretely what a task specification is, nor what it means for the demonstrations to be adequately summarized; in particular, in Alg. 1 the function is_summarized() is not defined, nor is its operationalization described anywhere in the text of the paper. A specification traditionally refers to a formal statement whose semantics are well known and whose satisfaction with respect to an output is computationally well-defined and consistent. By not having a concrete definition of specification, the evaluations set up in the latter part of the paper suffer from a lack of diversity and a lack of quantification of task difficulty.
**Code as output language**: This work relies on the output code being interpretable, and this involves providing a set of primitive parametric functions to the LLM to use. The part of sensing the environment, and acting using perception feedback is something that is already programmed into the primitives. Coming up with an adequate and a competent set of parameterized primitives is challenging, and in this case is entirely the responsibility of the system designer.
**Claims of generalization**: The generalization shown is only generalizing towards named entity substitutions, not to complex control flows and temporal specifications. Specifically, all the tasks shown here are a sequence of subgoals where performing any subgoal out of order will not preclude the agent from completing the subsequent actions. Further, there is no reactivity in the task specifications, and there are no avoidance tasks. All of these task specification notions are very common in robotics and planning problems that this system has not been evaluated on. Refer to [1] for a survey on robotics mission types (also relevant to symbolic planning). All the tasks here are limited to the visit or sequenced-visit type. Further, while the submission claims that Demo2Code can generalize to complex long-horizon tasks, the maximum task length is considerably shorter than the state of the art for symbolic planning. Automatically translating textual domain descriptions to a formal domain description followed by the use of automated planners has already shown more reliable performance on harder problems [5]. Further, there is quite a bit of evidence that LLMs cannot plan beyond the simplest of domains [6], and this line of research is unacknowledged in the submission.
**Issues on evaluations of learning from demonstrations:** Generalization from demonstrations and language is a tricky subject. Usually demonstrations and language contain complementary sources of information. Therefore none of the system behaviors are incorrect in Figure 5, the core issue is that inductive learning is by definition an ill-posed problem, and many approaches to inductive learning have relied on Bayesian inference in the past. [2],[3]. Specifically, the issue of where to place the purple block (fig 5a) given the language description is underspecified, and the system had to forcibly ground the placement to any of the valid options to generate a trajectory. Committing to a valid assignment as was done by the lang2code model is one approach, and asking for resolution of referential ambiguity is another approach [4]. One might argue that learning to overconstrain the output based on a single demonstration is an example of overspecification.
[1] - Menghi, C., Tsigkanos, C., Pelliccione, P., Ghezzi, C. and Berger, T., 2019. Specification patterns for robotic missions. IEEE Transactions on Software Engineering, 47(10), pp.2208-2224.
[2] - Tenenbaum, J.B., 1999. A Bayesian framework for concept learning (Doctoral dissertation, Massachusetts Institute of Technology).
[3] - Shah, A., Kamath, P., Shah, J.A. and Li, S., 2018. Bayesian inference of temporal task specifications from demonstrations. Advances in Neural Information Processing Systems, 31.
[4] - Williams, Tom, Rafael C. Núñez, Gordon Briggs, Matthias Scheutz, Kamal Premaratne, and Manohar N. Murthi. "A dempster-shafer theoretic approach to understanding indirect speech acts." In Advances in Artificial Intelligence--IBERAMIA 2014: 14th Ibero-American Conference on AI, Santiago de Chile, Chile, November 24-27, 2014, Proceedings 14, pp. 141-153. Springer International Publishing, 2014.
[5] - Liu, B., Jiang, Y., Zhang, X., Liu, Q., Zhang, S., Biswas, J. and Stone, P., 2023. Llm+ p: Empowering large language models with optimal planning proficiency. arXiv preprint arXiv:2304.11477.
[6] - Valmeekam, K., Olmo, A., Sreedharan, S. and Kambhampati, S., 2022. Large Language Models Still Can't Plan (A Benchmark for LLMs on Planning and Reasoning about Change). arXiv preprint arXiv:2206.10498.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: To strengthen the paper the authors should carefully define the following:
1. The role of the user/system developer. This includes the definition of skill primitives that can be executed in the environment, the definition of predicates that are sufficient to track the progress of the tasks, and the development of perception systems that accurately record the predicate states.
2. Quantify task complexity using metrics from logic (expressing instructions in temporal logic and measuring the size of the automaton), and report the number of predicates and actions for each planning domain.
3. Evaluate on diverse set of instructions taking inspiration from Menghi et al. [1] to come up with specification templates beyond sequenced visit.
4. Report comparative performance against state-of-the-art symbolic planners in comparably sized planning domains.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Please see above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's valuable feedback on how we can strengthen our work. We also thank the reviewer for seeing our approach's capability to summarize and handle learning from demonstrations!
Please find below our responses to the questions and concerns:
## Questions
#### **Q1:**
We assume that a system developer does the following:
- Provides a perception library. This library defines a set of predicates, similar to [8-9], and maintains this set based on sensor observations. It also contains helper functions that use these predicates, e.g. get_obj_location(obj), is_cooked(obj).
- Provides an action library. This library defines a set of actions that correspond to low-level policies, similar to [1-7].
These are common modules in a robotics stack and seem reasonable to assume. Given such a system, Demo2Code can take demonstrations from any user (not a system developer) to generate robot code that uses functions from these libraries. We list the set of low-level actions and predicate-based functions for each planning domain in Table 3 of the attached pdf.
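The two libraries described above can be sketched as follows. This is an illustrative sketch (not the authors' code): the predicate state is kept as a plain dict, `get_obj_location` and `is_cooked` are the helper names mentioned in the response, and `cook` is a hypothetical action wrapper.

```python
# Illustrative sketch of the system developer's two libraries. Predicate
# state is kept as a simple dict here; a real system would maintain it
# from sensor observations.
_state = {"location": {"patty": "stove"}, "cooked": {"patty": False}}

# Perception library: predicate-based helper functions.
def get_obj_location(obj):
    return _state["location"][obj]

def is_cooked(obj):
    return _state["cooked"][obj]

# Action library: wrappers around low-level policies (stubbed out here).
def cook(obj):
    _state["cooked"][obj] = True
```

Generated task code then only calls into these two interfaces, which is why Demo2Code can stay agnostic to how perception and control are implemented underneath.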
#### **Q2:**
We present the number of unique predicates and low-level actions for each planning domain in Table 2 of the attached pdf. We also cluster the tasks based on their type to report the demonstration length and code complexity (by code length, average number of loops, average number of conditionals, and average number of functions defined) in Table 4.
#### **Q3:**
We clarify the rationale for our selection of tasks, which has a different motivation than Menghi et al. [10]. Since we focus on learning tasks from demonstrations rather than task planning efficiency, we chose our tasks to test the following:
- **Recovering hidden world constraints:** The tabletop tasks require identifying constraints not specified in language but implicit in the demonstrations.
- **Inferring conditionals and control flows:** The Robotuille tasks require identifying conditionals (e.g. picking up the object if not holding it) and loops (e.g. cutting an object until it's cut, using for loops to cook multiple burgers, etc).
- **Extracting user preferences:** The EPIC-KITCHENS tasks require identifying the user’s preference, e.g. different ways to wash dishes.
#### **Q4:**
We adapted our pipeline to run LLM with a symbolic planner, LLM+P [11]. Since LLM+P does not have a recursive summarization pipeline to handle long demonstrations, we provide the task specification generated by Demo2Code.
We made these observations:
- LLM+P fails to capture user preferences if they are not observable in the goal state. Fig. 1 in the attached pdf shows a qualitative example. LLM+P produces the left plan which has a lower cost, but misses the preference that the user wants the robot to prepare all ingredients before assembling them into a burger because the order of subtasks is not captured in the goal state.
- LLM+P needs to be called every time the initial condition changes. In contrast, Demo2Code needs to generate the code once, which generalizes for different initial conditions.
## Weakness
### Methodology
In our work, the task specification is described in language rather than a formal structure. We empirically found the following format to work reliably.
- Header: 1-2 sentences that define the overall goal to ground the code generation step.
- Body: descriptions in pseudocode format to help the LLM generate the high-level task code more easily.
A demonstration is sufficiently summarized when it has been distilled down to a task specification that can “explain away the demos”, i.e. P(code | spec) = P(code | spec, demo).
In our implementation, we rely on instructions and examples in the prompt to have the LLM determine whether a demo is sufficiently summarized.
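The recursive summarization loop described above can be sketched as follows. This is a hypothetical stand-in: a lookup table of rewrite rules plays the role of the LLM, and the action and subtask names are illustrative.

```python
# Hypothetical sketch of recursive summarization. A real system asks an LLM
# whether the trajectory is sufficiently summarized; here a rule table that
# collapses low-level action pairs into subtasks stands in for the LLM.
RULES = {  # (low-level action pair) -> subtask name (illustrative)
    ("pick(lettuce)", "cut(lettuce)"): "prepare(lettuce)",
    ("pick(patty)", "cook(patty)"): "prepare(patty)",
}

def summarize_step(traj):
    for i in range(len(traj) - 1):
        key = (traj[i], traj[i + 1])
        if key in RULES:
            return traj[:i] + [RULES[key]] + traj[i + 2:]
    return traj  # no rule applies: nothing left to summarize

def summarize(traj):
    # Recursion stops when a pass changes nothing -- the stand-in for
    # the LLM's is_summarized() judgment.
    while True:
        nxt = summarize_step(traj)
        if nxt == traj:
            return traj
        traj = nxt
```

A four-step demonstration collapses to two subtasks, after which no rule applies and the loop terminates, mirroring the "no further summarization is possible" stopping condition.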
### Claims of generalization
Please refer to the global response where we clarify:
- The difference between the train and test tasks
- The code complexity of the different test tasks
We show that Demo2Code can generalize to tasks that have longer horizons, more states, and more control flows than examples in the prompt. Our long-horizon claims refer to the result that Demo2Code can solve longer tasks (up to 114 states) compared to existing LLM-based planners [1-7] (up to 20 states).
#### **On the planning ability of LLMs**
Please refer to the "Code vs other policy representation" section where we clarify that we focus on using LLMs to generate robot task codes from demonstrations rather than LLMs' planning abilities. We acknowledge the importance of symbolic planners and their complementary strengths to our summarization framework.
### Issues on evaluations of learning from demonstrations
We evaluate solutions in a manner similar to how imitation learning evaluates policies:
- When we generate demonstrations, we also generate held-out reward functions, captured as unit tests, that check for satisfying a set of sub-goals and constraints.
- We use these held-out unit tests to evaluate the code.
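A held-out unit test of the kind described above might look like the following sketch. The predicate names and the `burger_test` structure are hypothetical; the point is that both final-state sub-goals and ordering constraints are checked over a trajectory of predicate states.

```python
# Illustrative "held-out unit test" over a trajectory of predicate states,
# where each state is a set of true predicates. Names are hypothetical.
def first_index(traj, predicate):
    # Index of the first state in which the predicate becomes true.
    return next(i for i, state in enumerate(traj) if predicate in state)

def burger_test(traj):
    final = traj[-1]
    # Sub-goal checks on the final state.
    assert "cooked(patty)" in final and "on(patty, bottom_bun)" in final
    # Ordering constraint: the patty must be cooked before assembly.
    assert first_index(traj, "cooked(patty)") < first_index(traj, "on(patty, bottom_bun)")
    return True
```

Running generated code in simulation produces such a trajectory, and the test passes only if every sub-goal and constraint is satisfied.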
[1] Jacky Liang, et al. Code as policies: Language model programs for embodied control
[2] Ishika Singh, et al. Progprompt: Generating situated robot task plans using large language models
[3] Jimmy Wu, et al. Tidybot: Personalized robot assistance with large language models
[4] Andy Zeng, et al. Socratic models: Composing zero-shot multimodal reasoning with language
[5] Brian Ichter, et al. Do as i can, not as i say: Grounding language in robotic affordances
[6] Wenlong Huang, et al. Inner monologue: Embodied reasoning through planning with language models
[7] Kevin Lin, et al. Text2motion: From natural language instructions to feasible plans
[8] Toki Migimatsu, et al. Grounding predicates through actions
[9] Kei Kase, et al. Transferable task execution from pixels through deep planning domain learning
[10] Claudio Menghi, et al. Specification patterns for robotic missions
[11] Bo Liu, et al. LLM+P: Empowering large language models with optimal planning proficiency
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: **User-defined modules**: I appreciate this reflection, and it would be a valuable addition to the paper to better define the user's/systems designer's role
**Role of specification patterns**: I believe the authors misunderstood the intent of comparing selected tasks to Menghi's [ref-1 mine] framework. Menghi's framework is a taxonomy of temporal properties present in robotics tasks, and the authors should select tasks that fall into various categories from the hierarchy, and the selected tasks only fall under a few of the temporal properties.
I also appreciate the author's efforts to characterize the complexity of the tasks that the system was tested on.
**Definition of summarization**: I appreciate the clarification, and would like the authors to clarify if the computation of whether the demonstration is summarized actually leverages the 'explained away' criterion as mentioned (using logits of the LLM), or if it is asking an LLM to evaluate whether the instruction is adequately summarized?
Thus I feel the authors have reiterated their position more clearly in the response, the central issue with the lack of diversity of temporal properties in tasks learned from demonstration still remains. As it stands I plan to retain my current score
---
Reply to Comment 1.1.1:
Title: Thank you for your feedback (1/2)
Comment: ## Question
**Does the computation of whether the demonstration is summarized actually leverage the 'explained away' criterion as mentioned (using logits of the LLM), or does it ask an LLM to evaluate whether the instruction is adequately summarized?**
- At test time, the LLM is asked to evaluate if a trajectory is adequately summarized.
- However, when we design the summarization prompt, we validate the prompt by using tasks with short demonstrations to check if concatenating those demonstrations to the beginning of the generated specification changes the code.
- If specification alone and specification with demonstrations cause the LLM to generate the same code, we can approximately show that the prompt satisfies the criteria that P(code | spec) = P(code | spec, demo).
- Thus, our prompt validation approach essentially examines the arg max of the logits, which is the code, instead of the exact logits whose values may vary.
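The validation check above amounts to comparing the generated code with and without the demonstrations prepended. A minimal sketch, with a toy generator (`stub_gen` is a hypothetical stand-in for the LLM):

```python
# Sketch of the prompt-validation criterion: the prompt passes if the code
# generated from the specification alone equals the code generated from
# demonstrations + specification (comparing the argmax, i.e. the code itself,
# rather than exact logits).
def validate_prompt(gen, spec, demo):
    return gen(spec) == gen(demo + "\n" + spec)

def stub_gen(prompt):
    # Toy generator: emit one call per "Subtask:" line, ignoring raw demo steps.
    calls = [ln.split(": ")[1] + "()" for ln in prompt.splitlines()
             if ln.startswith("Subtask: ")]
    return "\n".join(calls)
```

If the specification already "explains away" the demonstrations, prepending them changes nothing and the check succeeds, approximating P(code | spec) = P(code | spec, demo).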
## Clarification on specification pattern
We would like to emphasize that since we are predicting code as output, we must measure the complexity of tasks in terms of their code complexity. We categorize our tasks under a taxonomy of the number of loops, conditionals, functions, code length, and horizon length (see Table 4 in the attached pdf). Our tasks provide coverage across these different axes. This is also consistent with how prior works [1-4] that predict task code have characterized their sets of tasks.
The challenge with using a different taxonomy, e.g. Menghi et al. [5], is a potential mismatch between categories in the taxonomy and the varying levels of complexity for the code generation model. Notably, two different categories can result in very similar code. For example:
- **Avoidance category:** Don’t stack blocks above a certain height results in "while stack_height < X"
- **Trigger category:** Wait till patty is cooked results in "while not is_cooked(patty)"
- Both are simply while() loops with different conditions.
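The two examples above can be made concrete with a short sketch; both categories reduce to while loops over predicates. All names and the stubbed state are illustrative, not from the paper.

```python
# Minimal illustration: an avoidance-category task and a trigger-category
# task both compile down to while loops over predicate checks.
def stack_until_limit(stack_height, limit, add_block):
    # Upper-restriction avoidance: "don't stack above a certain height".
    while stack_height < limit:
        stack_height = add_block(stack_height)
    return stack_height

def wait_until_cooked(is_cooked, step):
    # Trigger/wait: "wait till patty is cooked".
    while not is_cooked():
        step()
```

Structurally the generated code is identical up to the loop condition, which is the mismatch between the mission taxonomy and code complexity that the response points out.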
However, we are happy to characterize the current tasks we have in Menghi et al.[5]'s taxonomy as the reviewer requested. We also introduce 3 new tasks to increase the diversity in the new taxonomy. Please see the table below:
| Planning Domain | Task Name | global avoidance (Avoidance) | upper restriction avoidance (Avoidance) | lower/exact restriction avoidance (Avoidance) | wait (Trigger) | instantaneous reaction (Trigger) | delayed reaction (Trigger) | patrolling (Surveillance) |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Tabletop | Place A on top of B | | | | | X | | |
| | Stack all blocks/cylinders | | X | | | | | |
| | Stack all objects into two stacks | X | | | | | | |
| Robotouille | Cook and cut | | | X | X | X | | |
| | Cook two patties | | | | X | X | | |
| | Cut two lettuces | | | X | | | | |
| | Assemble two burgers | | | | | | | |
| | Make a burger | X | | X | X | X | X | |
| | Make two burgers | X | | X | X | X | X | |
| | **Keep making burgers (new)** | **X** | | **X** | **X** | **X** | **X** | **X** |
| | **Keep assembling burgers (new)** | **X** | | | | | **X** | **X** |
| | **Cooking multiple patties simultaneously (new)** | | | | **X** | **X** | | **X** |
| EPIC-KITCHENS | Washing dishes | | | X | | X | | |
For the new "keep making burgers" and "keep assembling burgers" clusters of tasks, we report Demo2Code's execution success rate and unit test pass rate below:
| Task Cluster | Requirements | Execution Success Rate | Unit Test Success Rate |
|:---:|:---:|:---:|:---:|
| Keep making burgers | stacking lettuce atop patty immediately | 0 | 0 |
| | stacking patty atop lettuce immediately | 1 | 0 |
| | stacking lettuce atop patty after preparation | 0 | 0 |
| | stacking patty atop lettuce after preparation | 0 | 0 |
| | substitute lettuce with cheese | 1 | 1 |
| | substitute patty with chicken | 1 | 1 |
| | add tomato | 1 | 1 |
| Keep assembling burgers | none | 1 | 1 |
| | do the tasks in parallel | 1 | 0 |
| Overall | | 0.67 | 0.44 |
[1] Jacky Liang, et al. Code as policies: Language model programs for embodied control
[2] Ishika Singh, et al. Progprompt: Generating situated robot task plans using large language models
[3] Jimmy Wu, et al. Tidybot: Personalized robot assistance with large language models
[4] Andy Zeng, et al. Socratic models: Composing zero-shot multimodal reasoning with language | Rebuttal 1:
Rebuttal: We thank all reviewers for their time, energy, and helpful feedback! We are excited to see that reviewers view the problem as important and challenging *(Reviewer sxrs, ujZb, 325h, yYo2)*, find our LLM summarization framework to be novel *(325h)* and important *(sxrs, yYo2)*, and view Robotouille as a new benchmark for high-level task planning *(ujZb, yYo2)*. We are also pleased to see that reviewers find the paper well written *(Gc7K, ujZb)* and figures easy to understand *(ujZb, yYo2)*.
**Please find an attached pdf with new ablation studies, baseline comparison, and experiment details.**
### Clarification of scope
We build on prior works [1-4] that leverage LLMs to generate robot code from detailed language instructions. In contrast, Demo2Code generates robot task code from state-based demonstrations. People often cannot provide detailed language instructions and tend to underspecify. With Demo2Code, we show that even with little or no language input, we are competitive with prior works [1] that have access to detailed language instructions.
### Access to state-based demonstrations
Most robotic systems have perception modules [5-6] that can parse raw sensor data into predicate states. We think assuming access to state-based demonstrations is reasonable so that we can focus on generating code, which is nontrivial given how long demonstrations can get. As future work, we are working on integrating Demo2Code into an end-to-end robotic system.
We also conducted several ablations:
- Table 1 in Section G of the appendix shows ablation where we add noisy predicates to the EPIC-KITCHENS demonstrations. We find that the LLM's performance worsens and goes from correctly extracting 5 users' preferences to 3.
- We also conduct new studies on Robotouille where 10% of the predicates or 10% of the states are randomly removed. Missing 10% of the predicates only slightly worsens the performance (0.462 to 0.42), and missing 10% of the states (which could remove more than 10% of the predicates) has a larger effect (0.462 to 0.327). More details are in Table 1 of the attached pdf.
### Why code?
We choose code as the output representation because:
- Much of high-level robot tasks are programmed as code.
- Code offers a flexible, concise way of expressing control flows, calling external libraries like perception and planning, etc.
- Code allows composability.
- Code is interpretable to engineers and verifiable through static analysis.
- LLMs have been trained on code and can generalize well in this output space [8-9].
### Code v.s. other policy representation
We don't claim our approach is an alternative to a symbolic planner. In fact, many problems such as re-arrangement tasks are more suited to be solved as a planning problem. For such cases, we can modify Demo2Code to summarize demonstrations to a task specification for a (PDDL) planner.
We experiment combining Demo2Code with LLM+P [7] and present a qualitative example in Figure 1 of the attached pdf. We find that while this combination produces valid plans:
- LLM+P fails to capture user preferences that aren’t observable in the final state.
- LLM+P needs to be run for every new initial condition, while the code generalizes to different environments.
Still, both code and PDDL are valid ways to represent a policy, and they can even co-exist, e.g. the code can call a planner. However, for the reasons stated in "Why code?", we chose to generate code.
### Clarification of generalization results
We clarify how Demo2Code can generalize and solve unseen complex tasks with longer horizons and more predicates compared to examples in the prompt at train time.
For Robotouille, the tasks at train time are significantly different from the ones at test time:
- Mean horizon length: 11 states (train) v.s. 32 states (test)
- Mean number of predicates: 15 predicates (train) v.s. 53 predicates (test)
Compared to baselines, Demo2Code performs the best for long burger-making tasks (an average of 26 states and 43 predicates) even though the prompt doesn't show this type of task. Our pipeline also shows its ability to identify control loops, e.g. solving a task to make two burgers with patties, lettuce, and tomatoes (which has 71 states and 115 predicates). Demo2Code used a for-loop to make two burgers, generalized to unseen subtasks (e.g. cut tomatoes), and composed 7 distinct subtasks together to make one burger.
We also test the code against various initial conditions procedurally generated by the Robotouille simulator, where the robot starts in a new position, items are in novel arrangements, etc. These environments check the code against corner cases, e.g. when a key item is in a stack, the robot should unstack the item instead of directly picking it up.
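The stacked-item corner case mentioned above can be sketched as follows. This is a hypothetical illustration of what correct generated code should do, with an invented environment dict and helper name, not the authors' implementation.

```python
# Sketch of the corner case: if a key item sits under other items, the
# generated code should unstack the blockers before picking, rather than
# picking directly. All names here are illustrative.
def acquire(item, env):
    while env["on_top_of"].get(item):      # something is stacked on the item
        blocker = env["on_top_of"].pop(item)
        env["table"].append(blocker)       # unstack the blocker first
    env["held"] = item                     # now a direct pick is valid
```

Procedurally generated initial conditions exercise exactly this branch: the same code must succeed whether the item starts free on the table or buried in a stack.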
### Clarification of the definition of "latent task specification"
We wish to clarify the sentence: “Our key observation is that the input (demonstrations) and the output (code) share a latent task specification”. We use the term “latent” to evoke the analogy with an encoder-decoder architecture (encoder being the summarizer and decoder being the code expansion). "Task specification” refers to the detailed language description of how the task should be completed.
[1] Jacky Liang, et al. Code as policies: Language model programs for embodied control
[2] Ishika Singh, et al. Progprompt: Generating situated robot task plans using large language models
[3] Jimmy Wu, et al. Tidybot: Personalized robot assistance with large language models
[4] Andy Zeng, et al. Socratic models: Composing zero-shot multimodal reasoning with language
[5] Toki Migimatsu, et al. Grounding predicates through actions
[6] Kei Kase, et al. Transferable task execution from pixels through deep planning domain learning
[7] Bo Liu, et al. LLM+P: Empowering large language models with optimal planning proficiency
[8] Mark Chen, et al. Evaluating large language models trained on code
[9] OpenAI. Gpt-4 technical report
Pdf: /pdf/dd748050aa8d73a825589a33a9e3193b27758dba.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper presents a method that can take both demo and language in and teach LLM to perform new tasks. The idea makes sense and the algorithm is easy to understand and works very well. Evaluation results and ablations show improvement over existing works.
Strengths: The paper is well written and the idea is clear and easy to understand.
Weaknesses: It's probably better to define what a "task" is. For example, the authors claim "Demo2Code can generalize across complex, long-horizon tasks." However, for real-world robotics tasks, I would imagine there will be a lot of corner cases that the code needs to handle, and therefore it won't generalize to those tasks.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: It looks like the method does not involve any network training / fine tuning, right?
If so, do you expect to get better performance by training / fine tuning LLM?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: As the author said, the ability of this framework is limited by the capability of LLM.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate that the reviewer is excited about the capability of our approach and finds our paper to be clear and easy to understand. We also thank the reviewer for suggestions to make our paper clearer. We would like to answer the questions and then address the concerns.
### Questions
#### **Q1: It looks like the method does not involve any network training / fine-tuning, right?**
Correct, we are currently doing in-context learning, where we provide the LLM with example queries and responses [8].
We do expect fine-tuning and updating the weights of the LLMs to increase the performance. However, fine-tuning large models is challenging because:
- it requires access to the parameters
- the process is computationally intensive
- it requires a large amount of data due to the large number of parameters LLMs have
Moreover, open-source models currently do not match the code-generating abilities of GPT3.5 [9].
Hence, in-context learning with GPT-3.5 allows quick reusability across different environments without needing large amounts of data and computational power.
### Weakness
#### **Clarification on tasks**
We define a task as a goal that contains a series of sub-goals, which correspond to sub-tasks that the robot needs to complete. For example, the make-a-burger tasks contain sub-goals on the item's status (e.g. the lettuce needs to be cut, the lettuce must be on top of the patty) and on the order of sub-tasks to execute (e.g. preparing all the ingredients before assembling the burger).
#### **Clarification on generalization ability to complex, long-horizon tasks**
In the global response, we clarify how Demo2Code is able to generalize to tasks that are more complex and have a longer horizon than the tasks in the prompt, e.g. for Robotouille, an average of 11 states and 15 predicates at train time v.s. 32 states and 53 predicates at test time.
In addition, Demo2Code is able to solve tasks with longer horizons (up to 71 states) compared to other existing LLM-based planners [1-7], which have an average of around 20 states. These planners mostly solve tasks that require chaining multiple low-level actions. In contrast, the tasks that Demo2Code can resolve require chaining multiple high-level subtasks, which each contains multiple low-level actions with loops and conditionals.
#### **Clarification on how the code handles corner cases**
We acknowledge the importance of ensuring that the generated policy can handle corner cases. That is why we developed Robotouille, which procedurally generates environments with different initial conditions to test our code in. This variability includes:
- the robot starts in a new, unseen position,
- items are in novel arrangements,
- key items exist among other irrelevant items, etc.
As future work, we are working on integrating Demo2Code into an end-to-end robotic system that would surface more realistic corner cases.
[1] Jacky Liang, et al. Code as policies: Language model programs for embodied control. arXiv preprint arXiv:2209.07753, 2022.
[2] Ishika Singh, et al. Progprompt: Generating situated robot task plans using large language models, 2022.
[3] Jimmy Wu, et al. Tidybot: Personalized robot assistance with large language models, 2023.
[4] Andy Zeng, et al. Socratic models: Composing zero-shot multimodal reasoning with language. arXiv:2204.00598, 2022.
[5] Brian Ichter, et al. Do as i can, not as i say: Grounding language in robotic affordances. In 6th Annual Conference on Robot Learning, 2022.
[6] Wenlong Huang, et al. Inner monologue: Embodied reasoning through planning with language models. In arXiv:2207.05608, 2022.
[7] Kevin Lin, et al. Text2motion: From natural language instructions to feasible plans, 2023.
[8] Mark Chen, et al. Evaluating large language models trained on code. arXiv:2107.03374, 2021.
[9] Hugo Touvron, et al. Llama: Open and efficient foundation language models, 2023.
---
Rebuttal Comment 1.1:
Title: Thank you for your feedback
Comment: Hello! Thank you again for your review and questions! Since the discussion period is coming to a close, please let us know if any additional clarifications would be helpful in your evaluation. | null | null | null | null | null | null |
UE4-NeRF:Neural Radiance Field for Real-Time Rendering of Large-Scale Scene | Accept (poster) | Summary: The paper proposed a method for large-scale scene reconstruction and rendering.
It utilizes NeRF to learn a mesh representation for the scene by optimizing the
vertex positions and neural features/MLPs using standard volume rendering.
To handle large-scale scenes, the method divides the scene into multiple blocks
and trains a NeRF for each block. The initial mesh within each grid is obtained
by creating an octahedron at the center of each voxel grid. The paper also learns
a hierarchical representation with Level-of-Details to support efficient
rendering viewed from different distances. The paper shows that such a representation
can be integrated with Unreal Engine for interactive rendering and scene composition.
The paper shows results on outdoor scenes captured by drones and makes comparisons
against baselines such as instant-ngp, urban-NeRF, and NeRF-w. The experiment results
show that the proposed method achieves higher accuracy in rendering and faster rendering
speed.
Strengths: 1. The pipeline of optimizing a mesh representation with volume rendering for large-scale
scene reconstruction is technically sound, and the paper shows that the proposed pipeline
achieves better accuracy than the baseline method in terms of both accuracy and rendering speed.
2. The paper introduces multiple techniques to increase the rendering efficiency and quality
of the reconstructions, such as using an alpha threshold to make the mesh close to the real
surface, using pseudo-depth to remove floaters, and using pre-rendering to filter out
triangles that have small contributions.
Weaknesses: See Questions.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors:
1. The overall pipeline of the paper is similar to that of MobileNeRF. The paper says in Line 100
that "in this work, we accelerate the rendering by ... into a mesh representation for efficient
mesh rasterization pipeline". How is this different from MobileNeRF?
While I agree that the proposed method is faster than MobileNeRF in terms of training, I think
it's the main benefit of using hash grid instead of MLP. In terms of representations, I think
the proposed method is similar to MobileNeRF.
In terms of supporting large-scale inputs, it seems that the major change is separating the scene into blocks and doing blockwise
training. Can MobileNeRF be adapted for this task if we train it for each block? I think the paper
should make the differences of the proposed representation clearer.
2. In Line 137, I think it should be "a regular octahedron with **8** faces". The paper says the
MobileNeRF has slow and unstable convergence issues. It's not clear to me why using octahedron
can help resolve this issue. What are the benefits of using it? A validation
study and comparisons to MobileNeRF are needed. It would give readers more insight if the paper
included a comparison to MobileNeRF on standard benchmarks such as NeRF Synthetic and LLFF.
3. The paper mentions having overlaps between blocks. How are the reconstructions handled at
overlapped regions? More details should be provided in this part.
4. In Equation 5, how is the threshold 0.3 chosen? How does the performance of the proposed method
change with different values of this threshold?
5. In Line 222, the paper says "filter out meshes that have an intersection opacity less than 0.3". Is
this opacity only calculated at a single intersection point? I am wondering whether this will
cause problems in regions where there are sharp edges with abrupt opacity changes.
6. I think the title of the paper is somewhat misleading. The reconstructed models are compatible
with standard rasterization engines using customized shaders, and are not specific to UE4. Integrating
into UE4 is also fairly trivial given the underlying polygonal meshes. I would suggest the authors
modify the short name to focus more on large-scale reconstruction.
Overall, I think the proposed method is technically sound, and the presented results are convincing.
My major concern is that the backbone used in the paper is similar to MobileNeRF and the paper combines
it with blockwise reconstruction (which has been explored in previous large-scale NeRF works),
which prevents me from giving a higher rating.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations look good to me.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: A1: Thanks for your comments. Our proposed method is in fact different from Mobile-NeRF. **For a detailed comparison, please refer to the Author Rebuttal.** In our tests, Instant-NGP on the same block requires 70,000 iterations to achieve results close to our method. Mobile-NeRF training is divided into three phases, the first two of which involve model training and require 300,000 iterations for the first phase and 500,000 for the second. Suppose we replaced Mobile-NeRF's backbone with a hash grid: fitting opacity (high-frequency information) requires more training than fitting volume density and also needs additional mesh optimization, so we can roughly estimate that the first phase would still need at least 70,000 iterations to approach our quality. Based on the highest speedup Instant-NGP achieves in the first phase, the replaced Mobile-NeRF backbone would need at least 120,000 iterations to complete the second phase; in the third phase, Mobile-NeRF exports precomputed feature maps and grids. In contrast, our method needs only 80,000 iterations in total to complete training and export the UV coordinates of the vertices and the mesh.
**In the Author Rebuttal, we provided a comprehensive comparison between UE4-NeRF and Mobile-NeRF.** Based on our previous experiments, training a block with Mobile-NeRF takes around two days on 4x 3090Ti GPUs. Our focus is on modeling large scenes; modeling a scene of the scale described in the manuscript would require around two months. Additionally, Mobile-NeRF demands significant GPU memory during training, and the generated faces and features necessitate substantial storage. It is not feasible to load such a large number of models for real-time rendering.
A2: Our octahedral layout consists of a total of 20 faces, including the 8 exterior faces and the 12 interior faces. We attempted Mobile-NeRF's grid layout (initially without tilted surfaces), but found that it required more training iterations to achieve a good fit on tilted roofs (an additional 20,000 iterations on top of our 80,000). Moreover, rasterization artifacts were noticeable in certain shrub areas (it took an additional 50,000 iterations to significantly reduce the visible artifacts). On the other hand, our octahedral grid handles more complex face-to-face intersections: the average distance from any point in space to the nearest triangular face is smaller than in Mobile-NeRF's grid layout. This results in faster convergence in the vertex optimization part of our approach.
The focus of our work is on images captured by drones in large-scale scenes with GPS information. One of our goals is to establish a measurable NeRF model for these scenarios, which requires GPS data for real-world scale conversion. We performed a coordinate system transformation prior to modeling.
In the UE4 coordinate system, the positive direction of the X-axis points to the east, the positive direction of the Y-axis points to the south, and the positive direction of the Z-axis points up. The scale is 100:1. Compared with other NeRF models, it is also very meaningful and novel to be able to measure the actual scale in the scene modeled by NeRF.
Additionally, the current version of our method relies on GPS data for various processing steps such as block division, optimal modeling height calculation, and ground estimation. However, existing public datasets lack GPS information within their images. In principle, if we remove the reliance on GPS information, our approach is feasible for a variety of datasets. We are in the process of exploring the generalizability of the proposed approach tailored for aerial data, to be applicable across diverse scenarios.
A3: When dividing the scene, we partition it exactly into multiple areas, with no intersection between the target modeling areas. During modeling, however, to avoid the influence of area-mask errors, the actual modeled area is expanded outward by a factor of 1.333; when exporting, we do not export the mesh of the expanded margin.
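The expansion described in A3 can be sketched as growing each block's footprint about its center for training while exporting only the original extent; the 2-D bounding-box formulation here is an assumption for illustration, not the authors' code.

```python
def expand_block(xmin, ymin, xmax, ymax, factor=1.333):
    """Expand a block outward by `factor` about its center to absorb
    area-mask error at the borders; the unexpanded extent is what
    gets exported."""
    cx, cy = (xmin + xmax) / 2.0, (ymin + ymax) / 2.0
    hw = (xmax - xmin) / 2.0 * factor
    hh = (ymax - ymin) / 2.0 * factor
    return (cx - hw, cy - hh, cx + hw, cy + hh)

# a 2x2 block grows to roughly 2.666x2.666 for training
train_extent = expand_block(0.0, 0.0, 2.0, 2.0)
```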
A4: The higher the threshold, the more the final model tends to capture opaque objects, while a lower threshold performs better in modeling translucent objects, but it results in more triangles during export, leading to increased rendering and storage costs. After comparing visualization effects and triangle counts across various threshold values, we found that a threshold of 0.3 strikes a good balance between effective modeling of translucent objects and reasonable triangle count.
A5: Thanks for your reminder. A triangular face will only be clipped if the opacity of all intersection points on that face is below 0.3. In our rendering process, we haven't defined a minimum opacity threshold, and any valid opacity on effective triangular faces is preserved during asynchronous feature map computation. There's no threshold truncation. Hence, in practical rendering scenarios, this issue is scarcely encountered.
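As an illustrative sketch (not the authors' implementation), the clipping rule in A5 (a triangle is removed only when every sampled intersection opacity on it falls below 0.3) can be written as:

```python
import numpy as np

def keep_triangles(triangle_opacities, threshold=0.3):
    """Keep a triangle unless *all* of its sampled intersection
    opacities fall below the threshold.
    `triangle_opacities`: list of 1-D arrays, one per triangle."""
    return np.array([np.any(op >= threshold) for op in triangle_opacities])

tris = [np.array([0.05, 0.10]),   # every sample below 0.3: clipped
        np.array([0.05, 0.60])]   # one sample at/above 0.3: kept
mask = keep_triangles(tris)       # first False, second True
```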
A6: Currently, a significant portion of our related work is built upon the UE engine, encompassing features such as dynamic model loading, asynchronous feature-map inference, and LOD layer calculations. Our rendering is not a static loading process like Mobile-NeRF's, which statically infers feature maps, saves mesh grids, and then lets the rendering engine load them; instead, we perform real-time, on-demand inference of feature maps and switching of LOD layers during rendering. Loading is a dynamic process that predicts which model regions need to be loaded or unloaded and releases data such as textures from memory and cache when necessary. A large number of these operations depend on specific APIs of the UE4 engine. While the approach is theoretically adaptable to rendering engines like Unity, it has not been experimentally validated in other engines. Hence, in the interest of rigor, we refer to our approach as "UE4-NeRF".
---
Rebuttal 2:
Title: Dear Reviewer 3kud, we are looking forward to your response.
Comment: Dear Reviewer 3kud, thank you for your concerns. In response to your question, we elaborated on the differences with Mobile-NeRF and supplemented our experiments. We apologize for the inconvenience of reaching out again, but as the deadline for the discussion phase is approaching, we find it necessary to send this reminder. ***We are anxiously awaiting your response and looking forward to further dialogue.***
**Best regards and thank you once again.** | Summary: The authors introduce a multi-scale surface based representation for radiance field reconstruction and real-time rendering of large-scale scenes. The method involves subdividing the scene into partitioned scenes initialized with a regular octahedron mesh. Through joint optimization of the vertices' positions and multi-resolution hash grids, the proposed method incorporates two rendering loss functions and depth supervision. Experimental results demonstrate that the proposed method achieves improved rendering quality while maintaining real-time rendering capabilities (>30 FPS) for large-scale scenes at 4K resolution.
Strengths: 1. The manuscript successfully tackles the crucial challenge of fast training and rendering for large-scale scenes. Excellent work!
2. The multi-scale meshes representation seems fresh and interesting.
Weaknesses: 1. Presentation. The "transient object" section distracted me a lot; I don't think this manuscript works on dynamic scenarios, correct me if I am wrong.
2. Evaluation. The proposed method is only evaluated on the UAV dataset, making it hard to judge its general performance; it would be great to have some results on standard datasets such as Mip-NeRF 360 and Tanks and Temples.
3. Comparison. I could not find any comparison or discussion regarding Mobile-NeRF, even though it is the closest method to the approach presented in this manuscript.
4. Fair comparison. Table 1 shows the proposed method provides clearly better performance than previous Mip-NeRF, NeRF-W, Nerfacto, and Instant-NGP; however, these methods don't use depth supervision by default. Did you plug in the additional loss for these methods or use their official settings?
5. Anonymity. I don't think putting a project link associated with your personal GitHub account in the manuscript is a good idea. I'm not sure if it violates the anonymity rule; I did not take it into account when scoring, but I bring it up in hopes of hearing the authors' and the other reviewers' perspectives.
Minor:
The caption of Fig. 2: we partition the ...
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. L137, why tilted surfaces can result in unstable convergence while regular octahedrons wouldn't?
2. The usage of "epochs" is confusing; it is not clear to me what it refers to. Do you mean "iterations"?
3. L196, how dense point clouds are used in the manuscript?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes, the authors provide a limitation section and make sense to me.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: A1. We can use masks to exclude dynamic objects (such as people or vehicles) and prevent them from affecting the rendering results.
A2. Sincerely thank you for your suggestion.
The focus of our work is on images captured by drones in large-scale scenes with GPS information. One of our goals is to establish a measurable NeRF model for these scenarios, which requires GPS data for real-world scale conversion.
We performed a coordinate system transformation prior to modeling. We converted coordinates into a physically meaningful framework, establishing two transformations from NeRF coordinates to UE4's coordinates and GPS coordinates to UE4's coordinates.
In the UE4 coordinate system, the positive direction of the X-axis points to the east, the positive direction of the Y-axis points to the south, and the positive direction of the Z-axis points up. The scale is 100:1. Compared with other NeRF models, it is also very meaningful and novel to be able to measure the actual scale in the scene modeled by NeRF.
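For illustration, the stated axis convention and 100:1 scale (UE4 units are centimeters) can be sketched as a mapping from local east-north-up coordinates in meters; this is an assumed simplification of the authors' full NeRF/GPS-to-UE4 transforms.

```python
import numpy as np

def enu_to_ue4(east_m, north_m, up_m):
    """UE4 frame per the rebuttal: +X east, +Y south, +Z up,
    scaled 100:1 (meters to UE4 centimeter units)."""
    return np.array([east_m, -north_m, up_m]) * 100.0

p = enu_to_ue4(1.0, 2.0, 0.5)  # 1 m east, 2 m north, 0.5 m up
```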
Additionally, the current version of our method relies on GPS data for various processing steps such as block division, point cloud generation from feature points, optimal modeling height calculation, and ground estimation. However, existing public datasets lack GPS information within their images. In principle, if we remove the reliance on GPS information, our approach is feasible for a variety of datasets. We are in the process of exploring the generalizability of the proposed approach tailored for aerial data, to be applicable across diverse scenarios.
A3. Based on our previous experiments, training a block with Mobile-NeRF takes around two days on 4 * 3090Ti GPUs. Our focus is on modeling large scenes, and if we were to model a scene as described in the manuscript, it would require around two months. Additionally, Mobile-NeRF demands significant GPU memory during training, and the generated faces and features necessitate substantial storage. It's not feasible to load such a large number of models for real-time rendering. Considering the rationale for comparison, we did not provide a complete comparison in the manuscript. **In our "Author Rebuttal", we discussed the comparison between our approach and Mobile-NeRF. Moreover, in the submitted PDF, we provided both qualitative and quantitative comparisons with Mobile-NeRF.**
A4. To further speed up convergence and reduce abnormal phantom-like floating objects in the air, we propose pseudo-depth to optimize the training, which is itself an innovative method proposed for our scenario. The compared methods use their official code without depth supervision; however, our proposed pseudo-depth method is also applicable to traditional NeRF methods, and we can try applying pseudo-depth to the mentioned NeRF frameworks later. Your invaluable review and insights are greatly appreciated.
A5. Thank you. While it is a private GitHub account, we removed all personal information before manuscript submission. This maintains anonymity and adheres to the principle of anonymity.
A5.5: Thanks for your suggestion; we will correct the caption of Fig. 2 in the revised version.
A6. The grid layout we use differs from that of Mobile-NeRF. A regular octahedron consists of a total of 20 faces, which includes the 8 tilted exterior faces and the 12 interior faces. We attempted Mobile-NeRF's grid layout (initially without tilted surfaces), but found that it required more training epochs to achieve a fitting result on tilted roofs (reaching a visual quality similar to our current approach required an additional 20,000 epochs of training, whereas our method only needed 80,000 epochs). Moreover, the rasterization artifacts were noticeable in certain shrub areas (although they improved with further training, it took an additional 50,000 epochs to significantly reduce the visible artifacts). On the other hand, our octahedral grid (including 12 interior triangular faces) handles more complex face-to-face intersections. The average distance from any point in space to the nearest triangular face is smaller than in Mobile-NeRF's grid layout. This results in faster convergence in the vertex optimization part of our approach.
A7. I'm sorry for the confusion caused by my unclear expression. Yes, "epochs" refers to iterations.
A8. The point cloud points here are derived from feature points with minimal matching loss during camera pose estimation. We employ them as sparse depth supervision targets. Given that this point cloud is extremely sparse, with a ratio of about 1/10000 compared to pixels, we use a certain neighborhood around the projected coordinates of the point cloud on the image as supervision targets and confidence diminishes with increasing projection distance, resulting in lower supervision strength. Specific implementation details can be found in the attached document.
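A minimal sketch of the neighborhood weighting described in A8; the Gaussian falloff and the `sigma` value are assumptions, since the rebuttal only states that confidence diminishes with projection distance.

```python
import numpy as np

def depth_supervision_weight(pixel_uv, proj_uv, sigma=2.0):
    """Confidence for sparse depth supervision: 1 at the projected
    feature point, decaying with pixel distance (falloff assumed)."""
    d = np.linalg.norm(np.asarray(pixel_uv, float) - np.asarray(proj_uv, float),
                       axis=-1)
    return np.exp(-(d / sigma) ** 2)

# pixels in a small window around a point projected at (10, 10)
w = depth_supervision_weight([[10., 10.], [11., 10.], [14., 10.]], [10., 10.])
# w[0] is 1.0 at the projection; the weights shrink with distance
```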
---
Rebuttal Comment 1.1:
Title: additional results
Comment: Thanks for the rebuttal, it resolves my concerns.
The previous main concerns from the reviewers are the incomplete comparisons and unclear exposition, given the authors provide some new results, would love to hear back from Reviewer x4aP and JaPB.
---
Reply to Comment 1.1.1:
Title: Thanks for your affirmation of our response.
Comment: Thanks for your affirmation of our response. We responded to your concerns in detail and added additional experimental results. Diagrams are in the submitted PDF.
If you have any further concerns, please do not hesitate to tell us. Thanks again!
---
Rebuttal 2:
Title: Dear reviewer eVeu, Reviewers x4aP and JaPB, whom you mentioned, responded to our rebuttal and raised their ratings!
Comment: Dear reviewer eVeu,
I am reaching out again because the reviewers you mentioned responded to our rebuttal and raised their ratings. We included additional comparative experiments with Mega-NeRF **on public datasets** as you suggested. Kindly review our discussion with reviewer x4aP. UE4-NeRF has demonstrated remarkable outcomes.
Best regards and thank you once again!
Strengths: The proposed method not just trains fast, but also renders fast. It's cool to see the whole NeRF real-time rendering system integrated into the existing renderer, Unreal engine 4.
Weaknesses: 1. Writing could use some improvements; many key technical details seem missing, making the paper a bit hard to understand.
* 1.1 The frame rate of real time rendering is reported on what device?
* 1.2 Line 129-132 says that "we train a coarse model for sub-region segmentation..."; what model was trained here? Is it counted in the total training time of the proposed method?
* 1.3 Line 143 says "it generates 8D feature vector, which incorporates opacity information"; this seems very confusing to me, as in Eq. 1, the opacity is predicted alongside the feature vector.
* 1.4 Line 167-169 seems to suggest that max should be used in Eq. 5, as opposed to min?
* 1.5 Octahedron with 20 faces. It might be better to explain why there are 20 faces instead of the regular 8 faces.
* 1.6 Fig 5 village scene, the lower-right inset of ours does not seem to align with those of baselines.
* 1.7 How is the comparison with baselines with MipNeRF and InstantNGP performed? Were the baselines trained on each individual blocks or they were trained on the whole scene?
* 1.8 Line 248-254: alpha values seem to be dropped during real-time rendering. If so, please make it explicit, and include examples how such dropping affects the final rendering quality.
* 1.9 Fig. 1c seems confusing without labels "before editing" and "after editing". In addition, how was the inpainting performed when the building/car were removed?
* 1.10 I find it hard to understand what's going on in line 181-185. The authors might want to add some illustrations.
* 1.11 Line 52-54 seems to suggest that the speed up of training, compared with MobileNeRF, is due to implement some compute-heavy components using customized cuda functions. What are these components?
2. Ablations and comparisons can be improved; it's hard to judge if the proposed components are useful or not, and how it compares against baselines.
* 2.1 The BungeeNeRF work seems to be a better baseline than MipNeRF; but for some reason, a comparison with that work is missing.
* 2.2 Why not compare with MobileNeRF on each tile? Line 135-137 claims that the proposed mesh initialization addresses the unstable convergence issue of MobileNeRF; is there any intuitive explanation or empirical evidence to support this claim?
* 2.3 Ablations are needed to justify the effectiveness of the loss function in Eq. 6.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: 1. It seems that the proposed method is a bit complicated during training phase. Will the training code be released for reproducibility upon acceptance?
2. Would the proposed method also work for blender, besides the unreal engine 4?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 1 poor
Contribution: 2 fair
Limitations: Limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: A1.1 In UE4, you can input the command "stat fps" to show frame rate.
A1.2 Training a NGP model on low-resolution images of the entire scene aids us in better segmenting the scene. This step takes only a matter of minutes.
A1.3 Opacity is solely dependent on position and not influenced by direction. As a result, we can directly obtain opacity through the encoder. Utilizing the encoder, we derive an 8-dimensional feature vector, where one dimension's feature can be transformed into opacity via a nonlinear transformation. This is why we assert that the 8-dimensional feature vector encompasses opacity information.
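As a sketch of the design described in A1.3, one channel of the 8-D feature can be mapped to opacity by a nonlinearity; the sigmoid and the choice of channel are assumptions here, since the exchange does not pin down the exact transform.

```python
import numpy as np

def opacity_from_feature(feat8):
    """Derive opacity from one channel of the 8-D feature vector.
    Channel 0 and the sigmoid nonlinearity are assumed choices."""
    return 1.0 / (1.0 + np.exp(-np.asarray(feat8)[..., 0]))

alpha = opacity_from_feature(np.zeros(8))  # zero feature maps to 0.5
```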
A1.4 Eq. 5 is correct. Lines 170-173 should be revised to read: "In the initial 10,000 epochs, f is maintained at 0, and afterward f increases as the number of epochs increases, but does not exceed 0.3."
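A minimal sketch of this threshold schedule; only the endpoints are stated (0 for the first 10,000 iterations, capped at 0.3), so the linear ramp between them is an assumption.

```python
def opacity_threshold(epoch, warmup=10_000, total=80_000, f_max=0.3):
    """f stays at 0 during warmup, then grows with training,
    never exceeding f_max (shape of the ramp assumed linear)."""
    if epoch < warmup:
        return 0.0
    return min(f_max, f_max * (epoch - warmup) / (total - warmup))
```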
A1.5 A regular octahedron has 20 faces in total, including the 8 exterior faces and 12 interior faces. **We have included illustrative diagrams of three representative interior faces in the submitted PDF(Figure 3(c)) to aid your understanding.**
A1.6 To facilitate accurate measurements with our established model, we performed a coordinate system transformation to give it physical meaning. There exists a slight discrepancy between coordinates in UE4 and those estimated by COLMAP, but this error is minor and has negligible impact on the overall visual comparison. We have since rectified this discrepancy and will adjust the figures for consistency in subsequent revisions of the material.
A1.7 They were trained on the whole scene.
A1.8 We do not discard opacity values. The Alpha information is preserved within the 8 channels, and it is calculated non-linearly from one of those channels.
A1.9 The first line is before editing, and the second line is after editing. We directly remove the triangular mesh from the area and copy a portion of triangular mesh from surrounding similar regions.
A1.10 **We illustrate the generation of different levels of detail (LOD) in the submitted PDF (Figure 3(a)).** We choose the point in the grid with the largest opacity as the point of the synthesized grid to generate the octahedron, and the point on the edge is not used as the center point of the octahedron.
A1.11 Components: UV sampling based on vertices, generating feature maps from UV-sampled points, followed by BC4 compression. The description of "therefore" in line 53 of the manuscript is not entirely accurate. The impact is not solely due to the implementation in CUDA; this is merely one contributing factor, albeit not the determining one. Our approach, even without a CUDA implementation, achieves training times of less than 2 hours for a region on a single 3090Ti. With CUDA, this time is further reduced to less than 1 hour.
A2.1 Thanks for your suggestion; we tried this as well. However, it is highly impractical to obtain a wide range of multiscale images, from low to high, for training BungeeNeRF. The data used in BungeeNeRF are derived from rendered images of Google Earth models at various scales, rather than being captured from real-world imagery. **In the "Author Rebuttal", we extensively compared our approach with Mega-NeRF and Mobile-NeRF.**
A2.2 Based on our previous experiments, training a block with Mobile-NeRF takes around two days on 4x3090Ti GPUs. Our focus is on modeling large scenes, and if we were to model a scene as described in the manuscript, it would require around two months. Additionally, Mobile-NeRF demands significant GPU memory during training, and the generated faces and features necessitate substantial storage. It's not feasible to load such a large number of models for real-time rendering. Considering the rationale for comparison, we did not provide a complete comparison in the manuscript. **In our "Author Rebuttal" and submitted PDF, we discussed the comparison between our approach and Mobile-NeRF.**
The grid layout we use differs from that of Mobile-NeRF. We attempted Mobile-NeRF's grid layout (initially without tilted surfaces), but found that it required more training epochs to achieve a fitting result on tilted roofs (reaching a visual quality similar to our current approach required an additional 20,000 epochs of training, whereas our method only needed 80,000 epochs). Moreover, the rasterization artifacts were noticeable in certain shrub areas (although they improved with further training, it took an additional 50,000 epochs to significantly reduce the visible artifacts). On the other hand, our octahedral grid (including 12 interior triangular faces) handles more complex face-to-face intersections. The average distance from any point in space to the nearest triangular face is smaller than in Mobile-NeRF's grid layout. This results in faster convergence in the vertex optimization part of our approach.
A2.3 **In the submitted PDF (Figure 5), we provide rendered results without the second part of the loss.** When not utilizing the second part of the loss, the model tends to utilize a multitude of low-opacity faces to fit in regions of low color variation and semitransparency. This leads to accidental deletion of faces that should be part of the model's surface when exporting triangles. In scenes with glass and shadows, more triangles may be unexpectedly deleted. This effect is especially prominent on glass surfaces in the picture, where numerous triangles may be unintentionally removed.
A3. If our manuscript is fortunate enough to be accepted, we plan to make the training code publicly available!
A4. It seems you also have a keen interest in computer graphics! Currently, much of our related work is based on the Unreal Engine. While our method is theoretically applicable to other rendering engines like Unity or Blender, we haven't conducted experimental validation in those environments. Once again, thank you for the questions and suggestions you've raised!
---
Rebuttal 2:
Title: Dear Reviewer PLf2, we are looking forward to your response.
Comment: Dear Reviewer PLf2, thank you for your concerns. We apologize for the inconvenience of reaching out again, but as the deadline for the discussion phase is approaching, we find it necessary to send this reminder. ***We are anxiously awaiting your response and looking forward to further dialogue.***
**Best regards and thank you once again.** | Summary: This paper presents a method that combines NeRF and the Unreal Engine for real-time rendering of large-scale scenes. The method first partitions large scenes into sub-blocks, and represent NeRF via polygonal meshes initialized from regular octahedron. The opacity and feature vector are represented via hash-encoding. The mesh, opacity, and features are optimized during training. Inspired by LOD, the method trains meshes with different levels of detail to improve rendering efficiency at different scales. The optimized mesh can be integrated into the Unreal Engine 4 to achieve real-time rendering of large-scale scenes. Experiments demonstrate that the proposed method achieves better rendering quality than existing approaches and supports real-time rendering.
Strengths: - How to achieve real-time rendering of large-scale scenes under the NeRF setting is an important problem. This paper presents, to the best of my knowledge, the first solution to this problem. The main idea is to represent the NeRF via polygonal meshes so that they can be combined with the rasterization pipeline in Unreal Engine after optimization. Several other designs are introduced such as block partition and LOD representations. The overall design is reasonable.
- The experimental results are impressive. UE4-NeRF shows clearly better rendering quality than other baselines such as Mip-NeRF and Instant-NGP. It is also the only method that supports real-time rendering of 2k large-scale scenes.
Weaknesses: - Some baselines are lacking.
- The closest method to the proposed UE4-NeRF is Mobile-NeRF, as it also represents NeRF via polygonal mesh. However, the comparison to Mobile-NeRF is missing. According to line 53, UE4-NeRF is faster than Mobile-NeRF because it implements computationally intensive portions with CUDA. However, this advantage comes from different implementations rather than the method itself. Authors should compare the two methods in a fair setting to demonstrate the benefits of the techniques proposed in this paper.
- I suggest authors to also compare with Mega-NeRF (Mega-nerf: Scalable construction of largescale nerfs for virtual fly-throughs), which also studies building NeRF for large-scale scenes.
- The writing lacks clarity. For example:
- For the initialized mesh, line 10 mentions "octahedra" (i.e., 8 faces), line 55 mentions "tetrahedra" (i.e., 4 faces), while line 137 mentions "octahedron with 20 faces". These terms are inconsistent and make it very confusing which one is correct.
- At Eq.(1), it is not explained what is $P_i$. Is it the 3D coordinate of the point of intersection? Please clarify.
- At line 142 "encoded by multi-resolution hash functions", authors should mention that this follows Instant-NGP and cite it.
- At line 208, what is "acceleration grid"?
- In Fig.5, not all baselines are included. Authors should provide the qualitative results of other baselines in the supplementary material. The ground truth is also missing.
Besides, I noticed that the camera poses for different methods are not exactly the same. The camera poses should keep consistent in qualitative comparison.
- Grammar:
- Line 217: "we incorporate parallel lights from various angles are added above." There are two verbs in this sentence.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Authors may respond to my concerns in the weaknesses.
- At line 290, it is mentioned that "the actual rendering quality of UE4-NeRF is expected to surpass the metric’s performance, as achieving consistent exposure matching with the original image is challenging within the Unreal Engine environment." Then I suggest authors also report the performance using original volume rendering to remove the exposure gap.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: A1: Thank you for your encouragement and advice. The description of "therefore" in line 53 of the manuscript is not entirely accurate. The impact is not solely due to the implementation in CUDA; this is merely one contributing factor, albeit not the determining one. Our approach, even without utilizing CUDA implementation, achieves training times of less than 2 hours for a region on a single 3090Ti. With the integration of CUDA implementation, this time is further reduced to less than 1 hour.
Our training strategy diverges from Mobile-Nerf, and we optimize the method for generating polygonal meshes. **In our submitted PDF and Author Rebuttal, we provided both qualitative and quantitative comparisons with Mobile-NeRF.** There are several main reasons for the slow training speed and poor rendering quality of Mobile-NeRF:
1. The model backbone's inference is slow, resulting in slow training for individual batches. Additionally, each batch demands a substantial amount of GPU memory. With a constrained GPU memory size, further expanding the batch size is not feasible. Consequently, training necessitates a greater number of epochs to complete;
2. Slow convergence of the grid-like structure. We attempted Mobile-NeRF's grid layout (initially without tilted surfaces), but found that it required more training epochs to achieve a fitting result on tilted roofs (reaching a visual quality similar to our current approach required an additional 20,000 epochs of training, whereas our method only needed 80,000 epochs). Moreover, the rasterization artifacts were noticeable in certain shrub areas (although they improved with further training, it took an additional 50,000 epochs to significantly reduce the visible artifacts). On the other hand, our octahedral grid (including 12 interior triangular faces) handles more complex face-to-face intersections. The average distance from any point in space to the nearest triangular face is smaller than in Mobile-NeRF's grid layout. This results in faster convergence in the vertex optimization part of our approach.
3. Time-consuming separation of sampling points for triangular faces into strictly transparent and opaque regions. This process contributes significantly to training time.
4. The absence of depth information supervision also leads to slow convergence. In contrast, our approach benefits from sparse depth supervision, reducing the number of epochs required for training.
A2: **In our submitted PDF and "Author Rebuttal", we have conducted a comprehensive comparison with Mega-NeRF.**
A3: The description on line 55 is inaccurate and should read "octahedron" instead of "tetrahedra." Our octahedron has 20 faces in total, **including the 8 exterior faces and 12 interior faces (see PDF Figure 3(c)).**
A4: Yes!
A5: We will further explain the hash function and cite Instant-NGP in the revised version.
A6: In the training architecture of Instant-NGP, there exists a 128x128x128 binary dense grid structure designed to efficiently skip empty regions during rendering. We leverage this dense grid when calculating the visible grid, allowing for rapid culling of empty regions within the triangular mesh. **In the submitted PDF (Figure 3(b)), we illustrate this method.**
A7: Thank you for your attention to this detail. In the revised version, we will provide qualitative results for other baselines in the supplementary material.
To facilitate accurate measurements with our established model, we converted coordinates into a physically meaningful framework, establishing a transformation from GPS coordinates to UE4's coordinate system. In the UE4 coordinate system, the positive X-axis points east, the positive Y-axis points south, and the positive Z-axis points up. The scale is 100:1. Compared with other NeRF models, being able to measure actual scale in the scene modeled by NeRF is also very meaningful. However, there exists a slight discrepancy between coordinates in UE4 and those estimated by COLMAP; this error is minor and has a negligible impact on the overall visual comparison. We have now rectified this discrepancy and will make consistent adjustments in subsequent revisions of the material to enhance the visual comparison.
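As an illustration (our own sketch, not code from the paper), the axis convention stated above (+X east, +Y south, +Z up) and the 100:1 scale imply a conversion from local east/north/up offsets in meters to UE4 units along the following lines; the function name and the ENU input convention are our assumptions:

```python
def enu_to_ue4(east_m: float, north_m: float, up_m: float):
    """Convert local ENU offsets (meters) to UE4 coordinates.

    UE4 convention stated above: +X points east, +Y points south,
    +Z points up; scale is 100:1 (1 m = 100 UE4 units).
    """
    return (east_m * 100.0, -north_m * 100.0, up_m * 100.0)


# Example: a point 1 m east, 2 m north, 3 m above the origin
x, y, z = enu_to_ue4(1.0, 2.0, 3.0)  # -> (100.0, -200.0, 300.0)
```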
To aptly showcase more intricate details from diverse perspectives, we opted to compare images captured from free viewpoints rather than from the training viewpoints; consequently, no ground truth is available for these views.
A8: Thanks for your suggestion; we will modify it to "we incorporate parallel lights from various angles above".
A9: We have responded to all weaknesses. Sincere thanks for all your suggestions!
A10: The significant presence of intersecting faces in our octahedral mesh aids model convergence during training, but it introduces complications in volume rendering. In our method, during step-wise rendering, a substantial number of triangle intersections and depth sortings are required before rendering can proceed according to the transparency blending formula. This results in slow rendering speeds when adopting a volume rendering approach. Additionally, the challenge of blending multiple models from various regions further adds complexity. Addressing this issue demands effort akin to developing a dedicated rendering engine, which contradicts our initial intention of integrating rendering seamlessly into the UE rendering engine. One positive development is that we have successfully resolved the camera exposure issue in UE4 by replacing the tone mapper with a custom tone mapper during the post-processing phase of rendering. In the revised version, we will update our rendering results. **In the submitted PDF, the presented results for UE4-NeRF depict the outcomes achieved after resolving the exposure gap, revealing an observable improvement in rendering quality.**
---
Rebuttal Comment 1.1:
Title: Replying to rebuttal
Comment: Thank you for your response. The rebuttal has addressed most of my concerns. So I raised my rating accordingly.
---
Reply to Comment 1.1.1:
Title: Thank you for your affirmation of our response!
Comment: Thank you for your affirmation of our response. Improving the rating gave us great encouragement! If you have any further concerns, please do not hesitate to tell us. Thanks again! | Rebuttal 1:
Rebuttal: ## Comparison with Mobile-NeRF.
**Dataset.** We test the performance of Mobile-NeRF on one block, which contains 239 pictures, each with a resolution of 6000x4000.
In Table 1, we see that Mobile-NeRF takes 2 days to train just one block and requires 4x3090Ti GPUs; training the whole scene would take two months. Our method needs only 40 minutes to train a block (1x3090Ti), and when training multiple blocks we can perform multi-GPU parallel training. In Figure 2, the left side is a comparison of the rendered images under the training perspective, and the right side is a comparison of the final results under a free perspective in the renderer. Compared with the ground truth, UE4-NeRF shows excellent rendering results, while Mobile-NeRF produces obvious blurring. The results in the final renderer on the right show that our real-time rendering results are also much better than Mobile-NeRF's. Mobile-NeRF uses WebGL for real-time rendering, and we speculate that WebGL's limited rendering capability impacts the rendering quality: WebGL sorts objects according to the coordinates of each object's center rather than per surface, which produces various strange interlacing artifacts.
## Differences between Mobile-NeRF and UE4-NeRF.
1. The training strategy is different. Instead of Mobile-NeRF's three-stage training, we adjust the weights of each part of the loss function in stages during training to control the tendency of the training process.
2. We train 5 levels of meshes at the same time; the overhead of training 5 levels of meshes is only 1/7 higher than training only the highest-precision mesh. UE4-NeRF can also control the rendering overhead by dynamically switching between different precision levels of the model during rendering.
3. We use a different grid layout than Mobile-NeRF. In our mesh layout, the vertices converge faster.
4. The export strategy is different. Our method of calculating visibility differs from Mobile-NeRF's, and we only save the structural relationship between vertices and faces plus the UV coordinates of the vertices; we do not directly save the features on the faces as texture maps.
5. The rendering strategy is different. Our approach uses dynamic loading and rendering; the asynchronous process takes less than 1.5 seconds. In comparison, Mobile-NeRF uses an offline precomputation and real-time rendering strategy, which requires all feature maps on all surfaces to be computed when the model is exported, resulting in exponentially higher hardware resource requirements in large scenes. We can infer higher-precision feature maps in the rendering stage without requiring additional storage space: we use higher-precision 32x32 feature maps, while Mobile-NeRF uses 20x20 feature maps and requires more storage space.
6. Mobile-NeRF is unable to model translucent objects, but our method can approximately model and render translucent objects in UE4, which we achieve via alpha dithering and temporal fusion.
7. Measurability. Our model supports measuring real-world scale in the modeled scene.
## Comparison with Mega-NeRF.
**Dataset.** We measure the performance of Mega-NeRF on one scene, which contains 2000 images, each with a resolution of 6000x4000.
For each block, Mega-NeRF needs 36 hours to train for 500,000 epochs, while we need only 40 minutes to train for 80,000 epochs to reach convergence. When training the entire scene, Mega-NeRF requires 12 hours of additional time, but UE4-NeRF needs only 1 hour. It is worth noting that Mega-NeRF generates a large number of temporary files during training. In Figure 1, the left side is a comparison of the rendered images under the training perspective, and the right side is a comparison of the final results under a free perspective in the renderer. Compared with the ground truth, UE4-NeRF shows excellent rendering results, while Mega-NeRF produces obvious blurring. The results in the final renderer on the right show that our real-time rendering results are also much better than Mega-NeRF's. Real-time rendering results in the Mega-NeRF viewer are visibly degraded; in our experiments, the maximum resolution supported by the Mega-NeRF viewer is only 800x800 (1x RTX3090Ti), and when we enlarge the resolution, the program crashes.
## Supplementary Explanation of NGP and Block Strategy.
The NGP approach employs dynamic resolution and multi-frame fusion to achieve high frame rates for real-time rendering, but there is a noticeable presence of large pixel artifacts when the window size is larger. In the NGP experiments, we assessed the rendering speed and computational cost at a fixed 2K rendering resolution. Even an RTX 3090Ti can only support rendering a single high-complexity NeRF model, yielding a frame rate under 30fps. Further raising the rendering resolution leads to insufficient VRAM capacity, causing a crash.
In the past, we also attempted to directly combine NGP with blocking to model large scenes. For a large scene, we used 32 lower-complexity NGP models to model each area, minimizing the memory usage of each model's hash grid. When rendering, dynamic loading and model selection are combined to further reduce GPU memory and compute usage. Modeling accuracy improved when using NGP with a block strategy compared to using a single NGP model for the entire scene. However, despite these efforts, the rendering speed remained relatively low in practice: we could achieve approximately 10 fps at 720P resolution using two 3090Tis, and doubling the resolution led to a twofold increase in VRAM usage and halved the frame rate. Therefore, merely combining the block strategy with NGP cannot realize high-resolution real-time rendering of a NeRF model of a large scene. Our method, even when loading 32 blocks, achieves a rendering frame rate of 50fps at 2K resolution on a 3090Ti, and can also achieve a relatively high rendering frame rate at 4K resolution.
Pdf: /pdf/80779b56951d3bade0986bbdba805514a0532e0c.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper introduces UE4-NeRF, a system that combines Neural Radiance Fields (NeRF) with Unreal Engine 4 (UE4) for real-time rendering and editing of large-scale 3D scenes. To achieve this, the system partitions scenes into sub-NeRFs and represents them using optimized polygonal meshes based on regular octahedra and tetrahedra. Leveraging a Level of Detail (LOD) approach and the powerful development capabilities of UE4, the system enables high-performance rendering at different observation levels and seamless scene editing. Experimental results show that UE4-NeRF achieves rendering quality comparable to state-of-the-art methods while also accelerating training.
Strengths: - This paper describes a method that addresses the scalability limitations of NeRF by dividing the scene into smaller chunks and enabling parallel processing. This allows for the construction of large-scale NeRF models with reduced memory requirements and increased computational efficiency.
Weaknesses: - Overall results: The results present an aerial view of an urban scene with good resolution. However, it is not clear how the authors leverage NeRF's capabilities in this scenario. Specifically, a comparison with traditional Multi-View Stereo (MVS) should demonstrate view-dependent effects, subtle details, transparent objects, and how it performs compared to multi-view methods. Additionally, it would be beneficial to include a first-person wandering case similar to Urban NeRF[32].
- Contribution: From my understanding, this method combines previous work on Urban NeRF[32] scene partitioning with Instant-NGP. While it has its merits, compared to Instant-NGP, the qualitative improvement in the results is marginal overall.
- Experiments: The authors have compared their method with their own dataset, which is commendable. However, it is encouraged to include comparisons with established datasets such as the one used in Mega-NeRF[40]. Additionally, qualitative and quantitative comparisons with Mega-NeRF[40] would provide further insights.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: see above
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Based on the concerns raised, the marginal qualitative improvement compared to Instant-NGP, lack of comprehensive comparisons with established datasets, and unclear demonstration of NeRF's capabilities in the urban scene, it is recommended to borderline reject the paper unless relatively significant revisions are made.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - **In the submitted PDF (Figure 4), we have provided additional qualitative comparisons with MVS.** Our experiments offer comparisons in transparent objects as well as subtle details. MVS utilizes sparse reconstruction to extract feature points, which are then expanded based on morphological and color differences to generate a dense point cloud. This dense point cloud is further used for surface reconstruction, resulting in triangulated meshes. However, due to the dynamic nature of water, the feature points extracted from images taken at different moments and perspectives often lack consistent and mutual matches. Consequently, when using MVS for reconstruction, water bodies may exhibit a substantial number of gaps or holes. Additionally, surface reconstruction methods are not well-suited for handling multiple surfaces, particularly situations involving multi-layered object surfaces due to semi-transparency.
Within the principles of MVS, there exists a parameter controlling the neighborhood range. Generally, the default neighborhood value prioritizes hole avoidance, which can result in suboptimal modeling effects for object detail structures and smaller objects. In MVS modeling, textured models are created by selecting an appropriate patch from all images to serve as the texture for a triangular face. Consequently, the color observed for this triangular face remains consistent from any angle during rendering. However, this approach works well only for objects that ideally adhere to the diffuse reflection model, particularly those that are opaque. In reality, some objects exhibit highlights and semi-transparency, and MVS-generated textures cannot accurately reflect these characteristics. As a result, rendering visual effects can be subpar. Our approach does not encounter the issues mentioned above.
Urban-NeRF is trained on a dataset of horizontally captured street views, making it most suitable for simulating first-person wandering along trekker motion routes. In contrast, our training images are taken from a top-down drone perspective, representing an aerial viewpoint. This aerial perspective aligns with the first-person wandering view in our application. Our project's demonstration video showcases first-person aerial wandering. Additionally, we have included our rendering plugin code and model, allowing users to control camera poses and perform real-time rendering independently.
- Thanks for your comments. Actually, our proposed method is quite different from a mere combination of Urban NeRF and NGP. The main differences are as follows:
1. Distinct methods of scene representation. We train meshes at multiple scales to represent a scene and effectively combine them with the LOD system in UE4.
2. The rendering principle is different. In UE4, the high-quality feature texture map on the mesh surface is asynchronously inferred, and the MLP inference in the real-time rendering pipeline is combined with it to achieve high-resolution real-time rendering. NGP, however, cannot be integrated into rendering engines such as UE4, nor can it be conveniently added to a modeled scene for mixed rendering.
**Please refer to "Supplementary Explanation of NGP and Block Strategy" in the "Author Rebuttal" for further discussion.** Through our experiments, a simple combination of NGP and a block strategy cannot achieve high-quality, high-frame-rate rendering. We merely adopt the hash grids to minimize the training time of a single epoch. In large scenes, if real-time rendering is desired, a partitioning strategy is essentially the only option given limited hardware storage and computing resources. The focus of our paper is not the combination of a particular block strategy with a particular NeRF training method, but the rapid construction of each block on a single RTX3090-level graphics card in less than 1 hour, together with high-resolution real-time rendering.
Our method uses multi-scale meshes to represent the scene and greatly optimizes the rendering process, achieving high rendering accuracy, fast speed, and high frame rate. We managed to attain 50 fps at 2K resolution using a single 3090Ti. Moreover, as we further increased the screen resolution to 4K, the main impact was just on the frame rate, with a relatively minor increase in VRAM usage. In summary, our approach extends beyond a mere amalgamation of NGP and block-wise strategies. We exhibit discernible advantages in rendering scene scale, quality, speed, memory utilization, and other relevant aspects.
- Thank you for your encouragement and advice. **In the submitted PDF and "Author Rebuttal", we conducted a comprehensive comparison with Mega-NeRF.** The focus of our work is on images captured by drones in large-scale scenes with GPS information. One of our goals is to establish a measurable NeRF model for these scenarios, which requires GPS data for real-world scale conversion. We performed a coordinate system transformation prior to modeling. In the UE4 coordinate system, the positive X-axis points east, the positive Y-axis points south, and the positive Z-axis points up. The scale is 100:1. Compared with other NeRF models, being able to measure actual scale in the scene modeled by NeRF is also very meaningful and novel. Additionally, the current version of our method relies on GPS data for various processing steps such as block division, point cloud generation from feature points, optimal modeling height calculation, and ground estimation. However, existing public datasets lack GPS information in their images. In principle, if we remove the reliance on GPS information, our approach is applicable to a variety of datasets. We are in the process of exploring the generalizability of the proposed approach, tailored for aerial data, across diverse scenarios.
---
Rebuttal Comment 1.1:
Comment: I have reviewed the comments from other reviewers as well as the author's response. I appreciate the author's efforts in conducting additional experiments with MVS and Mega-NeRF. However, for the Mega-NeRF experiment, the author opted for their own dataset instead of the established public dataset provided by Mega-NeRF. This choice makes it challenging for readers to evaluate the results with confidence. While I recognize the author's aim to highlight the uniqueness of their data's GPS information, it remains meaningful to perform experiments on a public dataset. I'll await the author's response to this point before making my final decision.
---
Reply to Comment 1.1.1:
Title: Dear reviewer, we supplemented our experiments on public datasets.
Comment: Thank you for the suggestions you provided. We incorporated additional comparative experiments using the dataset introduced in Mega-NeRF, and included links to the comparative result images in each subheading.
### [Comparison with Mega-NeRF](https://github.com/JamChaos/UE4-NeRF/blob/master/pictures/ours_mega_nerf_compare.jpg):
We conducted qualitative comparisons with Mega-NeRF using the dataset it provides. We employed the pre-trained model for the "building" scene provided by Mega-NeRF's authors, which was divided into 8 sub-regions. Given that this scene is close to a square shape, following the blocking strategy of UE4-NeRF, we partitioned the "building" scene into a 3x3 grid, which aligns closely with the blocking scale of Mega-NeRF. We performed visual comparisons within the final real-time renderer, with Mega-NeRF employing dynamic rendering to enhance its rendering quality. Additionally, Mega-NeRF's dataset lacked controlled camera exposure and suffered from lengthy capture times, resulting in ambiguous observations of ground shadows from different viewing angles. Consequently, our approach exhibits some gaps in certain ground shadow areas and occasional anomalies such as artifacts that appear to float. **Despite these challenges, the final comparison is still striking: our rendering quality significantly outperforms that of Mega-NeRF. Moreover, on an RTX3090, UE4-NeRF achieves higher frame rates at higher resolutions compared to Mega-NeRF.**
### [Comparison of whether Mega-NeRF enables dynamic rendering](https://github.com/JamChaos/UE4-NeRF/blob/master/pictures/mega_nerf_static_dyn_compare.jpg):
In Mega-NeRF, enabling dynamic rendering results in a 2-second lag before the image becomes clear. While dynamic rendering does improve rendering quality to a certain extent, over time, Mega-NeRF's rendering frame rate drops to 15fps (and as low as 1fps during motion), leading to a suboptimal interactive rendering experience. **On an RTX3090, UE4-NeRF achieves a rendering speed of 50fps at a resolution of 2K, even at a higher rendering precision compared to Mega-NeRF. It also offers a seamless interactive rendering experience.**
### [Demonstration of Mega-NeRF's rendering results from far to near](https://github.com/JamChaos/UE4-NeRF/blob/master/pictures/mega_nerf_dyn_far_to_near.jpg):
We presented images from Mega-NeRF with dynamic rendering at three distances, ranging from far to near. It is evident that as the distance gets closer, Mega-NeRF does not further enhance rendering precision, indicating that it has reached its modeling precision limit. **In contrast, UE4-NeRF's multi-level mesh scene representation significantly improves rendering quality, utilizing high-precision mesh representations when close to the surface.**
Best regards and thank you once again. | null | null | null | null | null | null |
Compositional Abilities Emerge Multiplicatively: Exploring Diffusion Models on a Synthetic Task | Accept (poster) | Summary: The paper studies compositional generation abilities of conditional diffusion models. The main contributions of the paper are the concept graph framework, which is used to examine compositional abilities in a simplified setting, and insights on the learning dynamics of diffusion models.
Strengths: The paper is very well written, motivated, and easy to follow. The problem the authors are investigating is relevant to know the limitations of compositional generation of the current diffusion models.
Weaknesses: Even though the paper provides a very pleasing framework to investigate compositional capabilities of diffusion models through simple and highly controllable interventions, I think the submission is incomplete. This is mainly because the paper considers only a single architecture and does not provide guidelines on how insights from the concept graph framework could be further studied, or validation that they hold in a large-scale setting. Additionally, I would like to see a summary answering the questions from the abstract: “What are the reasons underlying this behavior? Which concepts does the model generally find difficult to compose to form novel data?”
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. When does the sudden emergence of compositionality occur during training, can this be somehow predicted? Does it happen at the same time for different architectures and training configurations (noise schedules, augmentations etc.)? Providing answers to these questions would be valuable to researchers designing inductive biases to models.
2. How is the critical threshold for learning new capabilities in Fig. 8 determined? Is there a practical way of estimating critical thresholds for learning harmful concepts based only on the given training dataset?
3. Why Fig. 8 results could not be interpreted as if you have even a slight amount of harmful concepts in the dataset, then the model can eventually learn these given sufficient amount of training/capacity? From this point of view even a small amount of harmful samples leads to unwanted behavior and filtering training data is very important.
Minor points on improving the clarity:
4. Potential typo at line 188, “0010” corresponds to a sample in the training set.
5. Fig. 5 duplicate sentence. For clarity Fig. 5 b) and c) would benefit having the same units in the x-axis (either optimization steps or epochs).
6. I would remove the bullet point: “if so, under what circumstances does it fail?” from the beginning of Sec. 4 to Sec. 5 because that is where this question is assessed.
7. Figure 8 lattices are potentially flipped. The lattice of (b) should be the lattice of (a) because in (a) the number of samples from “001” is modified, and vice versa. The lattice (b) is also redundant, I suggest refactoring the figure to contain three subfigures (a) showing the lattice and (b), (c) showing the interventions. Additionally, the x-axis shows the number of samples and the text discusses frequencies. I would switch the x-axis to show frequencies instead of the number of images to improve interpretability in a more general case.
8. Fig. 9 says “In this example, large objects are red, and small objects are blue.”, however in (a) there are large objects in both blue and red.
In general, figures would benefit if they were converted to vector graphics format (pdf, svg etc.) because they need to be zoomed in order to view.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer bYvo,
Thank you so much for carefully recognizing the strengths of our work while providing us with concrete action items to make our submission more impactful and complete. In response, we have intensively experimented over the last week to generate three new plots for you (Figs. R2, R4, R5) and also drafted a new paragraph to summarize the results of our study. With these additions, we have fully addressed your concerns, and hope that you could now recommend this submission for acceptance.
---
**Weaknesses:**
- **Exploration of Architectures [Figure R4]:** Thank you for your great suggestion. In response, we conducted experiments with more diverse architectures and model configurations. Please refer to Figures R4 and R5, and the general response. We will include these new results in the final version of the paper.
- **Lack of Guidelines on Concept Graph Framework [Figure R2]:** We agree that examining generalization to a large-scale setting would be a great way to demonstrate the robustness of our findings. In response, we have now run new experiments with more realistic data from CelebA to further verify our claims. Please refer to Figure R2 and the above general response for new experimental results and discussions. We confirmed that our findings on (i) learning dynamics following the structure of the concept graph and (ii) delayed generalization of minority attribute (gender), hold true, even at larger, more realistic scales. We will include these results in the final version of our paper.
- **Summary Answering the Questions from the Abstract
"What are the reasons underlying this behavior (unpredictable failure in compositional generalization)? Which concepts does the model generally find difficult to compose to form novel data?":**
This is a great suggestion. In response, we have drafted a concluding paragraph to be included in the final draft:
"*In this work, we introduced a concept graph framework and used controlled interventions with a synthetic dataset to formulate and characterize four novel failure modes in the compositional generalization of diffusion models. We first hypothesized and verified that compositional generalization to a concept class far from the training data, based on concept distance, can fail, as it requires more optimization steps due to the sequential nature of generalization (Figure 5). Furthermore, when a particular concept value is underrepresented in the training dataset (e.g., a minority color or gender), avoiding misgeneralization of the concept value, and thus achieving compositional generalization, requires far more additional optimization steps, even after achieving perfect performance on the training set (Figure 6). We also discovered a critical threshold for the inclusion of certain concepts in the training data, which is useful for deliberately making the model fail at learning harmful concepts (Figure 8). Finally, we found that correlations and biases in concept variables can make compositional generalization difficult, and fine-tuning does not provide a solution (Figure 9). Overall, our synthetic data approach allowed us to apply controlled interventions that shed light on four different mechanisms behind the failure of diffusion models in compositional generalization.*"
---
**Questions:**
- **Q1A Can we predict the sudden emergence of compositional abilities?**
Great question! The key challenge in predicting the timing of the sudden emergence of compositional ability lies in tracking the progress during the plateau phase of learning dynamics, where the compositional loss remains flat. Our insight into multiplicative emergence suggests that we can break down the compositional task into individual tasks to make progress on each sub-task visible.
- **Q1B Does the emergence of compositional ability happen at the same time for different architectures and training configurations (noise schedules, augmentations, etc.)?**
As mentioned in the previous paragraph and in our general response, we conducted experiments with more diverse architectures and configurations. As you can see in Figures R4 and R5, the result shows rich phenomena, including how attention mechanisms accelerate compositional generalization.
- **Q2 Response to how critical thresholds in Figure 8 are assigned and can they be practically estimated:**
We compute the thresholds, represented by a dotted line, as the minimum quantity of samples required to achieve non-zero accuracy in distance 2 compositional generalization for “111.” However, making quantitative estimations of critical thresholds for learning harmful concepts solely based on the available training data may present challenges.
- **Q3 Response to re-interpretation of results in Figure 8:**
We share your perspective. To clarify, in lines 328–331, we emphasize that when the quantity of data falls below a certain critical threshold, additional data filtering becomes unnecessary. Another way to view this observation is that when the critical threshold is exceptionally low, a more rigorous data filtering approach becomes imperative. This alternative interpretation of our finding is equally significant, and we will expand the discussion in the paper to highlight it.
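The multiplicative-emergence point in Q1A above can be illustrated with a toy calculation (our sketch, not the authors' code): when the composite metric is the product of several per-concept accuracies, steady sub-task progress looks like a long plateau followed by a sudden jump.

```python
# Toy illustration (ours, not from the paper): a composite metric that
# multiplies k per-concept accuracies hides steady sub-task progress.
k = 3  # number of concepts that must all be correct
per_concept = [0.1, 0.3, 0.5, 0.7, 0.9]  # hypothetical per-concept accuracy over training

composite = [a ** k for a in per_concept]
# The composite stays near zero early on (0.1**3 = 0.001) even though each
# sub-task is already improving, then rises sharply late (0.9**3 = 0.729).
```

Tracking the per-concept accuracies separately therefore makes the hidden progress during the apparent plateau visible.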
---
**Minor points:**
Thank you for all the suggestions on improving the presentation raised under minor points. We will reflect all of your feedback in the final draft.
---
Rebuttal Comment 1.1:
Comment: Thank you for the interesting experiments and thorough answers to my feedback. The new data seems to support the claims of the paper.
I acknowledge that testing your hypotheses using large-scale real-world data is challenging because controlling the attributes present in the training data becomes more difficult. I found the experiment on CelebA an important step in that direction, and future work may extend your work along these lines. Additionally, the exploration of various diffusion architectures provides valuable insights into the learning dynamics and shows that the observations are not specific to the initial model architecture selected in the paper.
In my opinion, these experiments, rewriting the concluding paragraph, and improving the presentation quality of figures improve the submission significantly. I am happy to update my score accordingly. | Summary: This paper empirically studies how compositional structure emerges in diffusion models. The paper proposes the abstraction of concept graphs, and illustrates how diffusion models first learn to fit the training data before compositionally generalizing. The paper illustrates how diffusion models have difficulty disentangling data and modeling low-data concepts.
Strengths: - The paper is clearly written and the color coding through the text and figures greatly helped comprehension of the paper
- I enjoyed the introduction of concept graphs and the illustration of compositional distance with respect to graph
- The analysis in the paper is quite enlightening, illustrating how diffusion models first fit in-distribution before generalizing out-of-distribution, as well as their struggles with confounding factors between variables and with low-data variables.
Weaknesses: - The evaluation setting is rather simple and focuses on a single synthetic dataset with 3 classes of attributes
- It would be good to study generalization both to more complex images (for instance photorealistic synthetic scenes rendered using something like Kubric) as well as to more factors larger than 3
- In practice, a lot of concepts in the world follow the Zipf distribution -- it would be interesting to analyze that
- It would also be interesting to see how syntactic form would affect compositional generalization
- Some theoretical analysis on compositional generalization would also be interesting
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer wac1,
We sincerely appreciate the time and effort you invested in your thorough and insightful review of our submission. Your recognition of the paper's well-written aspects, alongside your positive remarks on the analysis and our approach to understanding diffusion models, has greatly encouraged us.
We are equally grateful for your constructive suggestions to perform new experiments with (i) more than 3 attributes and (ii) more realistic images. Over the past week, we have dedicated ourselves to conducting those experiments and prepared two new plots (Figs. R1, R2) to directly address your feedback.
With these improvements, we believe we have successfully addressed your concerns. We hope that you are satisfied with the revisions, and that this will strengthen your support for the acceptance of this submission.
Below, we address your remaining questions and concerns in an effort to provide clarity on any remaining issues.
**Simplicity of Evaluation Setting:**
We emphasize that to test our hypotheses in a precise setup, we deliberately chose a simplified data-generating process. However, we agree that extending the results to more complex scenarios is valuable. In response, we have conducted additional experiments on a more complicated synthetic dataset with four attributes. Please refer to Figure R1 and the general response for the experimental results and observations; we found successful generalization of our claims in this setting as well. We will include these results in the revised manuscript to reflect your feedback.
**Generalization to More Complex Images:**
We agree that examining generalization to more complex and realistic images would be a great way to demonstrate the robustness of our findings. Accordingly, we have now run new experiments with more realistic data from CelebA to further verify our claims. Please refer to the above general response for the experimental results and observations. We confirmed that our findings on (i) learning dynamics following the structure of the concept graph and (ii) delayed generalization of the minority attribute (gender) hold true, even at larger, more realistic scales.
**Analysis of Zipf Distribution and Role of Syntactic Form:**
We find both these ideas to be very valuable and are keen to explore them further. Currently, we focus only on the goals of demonstrating that compositionality is indeed captured by modern generative models, that it likely underlies some emergent phenomena seen in their training, and the consequences of such sudden learning behaviors when biased datasets (i.e., correlated concepts) are used.
**Theoretical Analysis on Compositional Generalization:**
We emphasize that theoretical characterization of compositional generalization is extremely lacking in contemporary literature; in fact, there is little consensus on basic definitions of what compositionality means, as we note in the paper. By providing a formal framework and defining what we mean by compositionality, we do believe we have taken a step towards making a formal analysis amenable and are keen to follow up on this direction. Currently, we believe that pursuing this would divert attention from the primary motivation of the paper, and we therefore leave this for future work.
---
Rebuttal Comment 1.1:
Title: Rebuttal Response
Comment: I thank the authors for their rebuttal response -- I will maintain my current score, as I think the paper would be strengthened much more if a dataset other than CelebA were considered (for instance, attributes of CLEVR or rendered Kubric datasets). The CelebA dataset is very biased, and it's difficult to assess the accuracy with which concepts are correctly generated. | Summary: The authors try to understand the compositionality aspects of generative models by training a conditional diffusion model in a toy setting on synthetic data. They show that the models indeed learn to be compositional if we train longer. They also hypothesize that the sudden emergence of compositionality in the later parts of training is due to the multiplicative way in which the model learns the underlying factors. They also show that it is hard to disentangle and generalize to new concepts if the underlying factors are highly correlated.
Strengths: The paper is easy to follow. The study of compositionality in diffusion models is a crucial problem to understand. The authors studied it in a controlled setting which is novel.
Weaknesses: The authors chose a very simple synthetic dataset to study this problem. The dataset has only 3 attributes. Even though some of the observations make sense, we have no guarantees that any of the results will extend to the real world.
The figures are quite confusing. Especially the colors. Blue (albeit a different shade), was used as an attribute and to denote the training data.
In Section 4, the authors used a bit of sensationalist language, such as “sudden emergence” etc, while such terms are not defined nor appropriately cited. They can improve the writing in this section.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Did you try or have results with synthetic data with > 3 attributes?
2. Why did you not use synthetic data with real objects instead of colors and shapes? Verifying your claims on even a small real-object dataset would make the paper much more robust and useful.
3. I might have missed this detail, what is the size of the test set, did you look at only the “pink” examples in the figures?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer L63E,
Thank you for the insightful review of our paper. We are pleased that you found our research question crucial, our approach novel, and our writing easy to follow. Your specific and constructive recommendations to conduct additional experiments with more than three attributes and more realistic images were greatly appreciated. In the past week, we've committed to further experimentation and created two additional plots (Figs. R1, R2) in response to your feedback.
We believe these improvements adequately address your concerns. We hope that you will find the revisions satisfactory and that this will solidify your support for the acceptance of our submission.
**Question on Synthetic Dataset with >3 Attributes:** We emphasize that to test our hypotheses in a precise setup, we deliberately chose a simplified data-generating process. However, we agree that extending the results to more complex scenarios is valuable. We have accordingly conducted additional experiments with synthetic data with four attributes, adding “background color” as a new attribute, and have found our claims successfully generalize in this setup as well. Please refer to Figure R1 and the general response for the new experimental results and discussions. We will include these new results in the final version of the paper.
**Question on Using Real Data:** We agree that examining generalization to more complex and realistic images would be a great way to demonstrate the robustness of our findings. Accordingly, we have now run new experiments with more realistic data from CelebA to further verify our claims. Please refer to Figure R2 and the above general response for new experimental results and discussions. We confirmed that our findings on (i) learning dynamics following the structure of the concept graph and (ii) delayed generalization of the minority attribute (gender) hold true, even at larger, more realistic scales. We will include these results in the final version of our paper.
**Question on the size of the test set and whether only “pink” nodes from concept graphs are evaluated:** As noted in Appendix A.5, we generate 50 novel samples via the diffusion model per target concept class, i.e., the class defined by combinations of different concept values, and report the accuracy of probes trained to predict whether a sample contains all underlying concept values that define it. Given 8 nodes, this yields an evaluation of 400 samples per concept graph. That is, the ability to generate samples from both in-distribution (shown as blue nodes in figures) and out-of-distribution (shown as pink nodes in figures) concept classes is evaluated.
We also note that the “pink” nodes in the figures denote out-of-distribution concept classes, i.e., the model has never seen a sample that comes from that distribution during training. Evaluating the model’s ability to generate samples from such out-of-distribution classes allows us to show that a model can learn to produce samples from classes entirely unseen during training, a primary goal of our work. However, the evaluation does encompass the blue nodes as well (shown as blue lines in the learning dynamics plots).
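As a sketch of this all-or-nothing evaluation (function and variable names are ours, not the authors'), a generated sample counts as correct only if the probes recover every ground-truth concept value, and the per-class accuracy is the fraction of generated samples that pass:

```python
def class_accuracy(probe_preds, target):
    """Fraction of generated samples whose probed concept values all match
    the target concept class (an all-or-nothing criterion)."""
    hits = sum(1 for pred in probe_preds if pred == target)
    return hits / len(probe_preds)

# Hypothetical target class (1, 0, 1) with four generated samples,
# two of which are fully correct on all three concepts:
preds = [(1, 0, 1), (1, 0, 1), (1, 1, 1), (0, 0, 1)]
acc = class_accuracy(preds, (1, 0, 1))  # 0.5
```

In the paper's setup this would be computed over 50 generated samples for each of the 8 nodes of a concept graph.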
**Clarification on Figures and Colors:**
Thank you for this feedback! We will increase the diversity of colors used to represent different entities to reduce any possible confusion.
**Concerning the use of the term “sudden emergence” in Section 4:**
While we understand the criticism, we stress that the form of sudden generalization wherein a model does exhibit a sharp improvement in performance on a task (herein, generating samples from an out-of-distribution class) is unlike any standard notion of generalization we are taught in classical machine learning. Accordingly, we do firmly believe that a new technical term needs to be devised. To remain consistent with contemporary work [1], we currently use the term “emergence,” but are happy to use a more appropriate term if the reviewer has suggestions. We also emphasize that the use of “emergence” has a long history in related fields: e.g., in cognitive science, where emergence is often used to refer to sharp improvements in a child’s capabilities [2].
[1] Wei et al., 2022. Emergent Abilities of Large Language Models.
[2] Spelke, E.S. and Kinzler, K.D., 2007. Core knowledge. Developmental science.
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal
Comment: Thank you for the response. The rebuttal answered my questions. I am updating my score. | Summary: This work proposes a framework for studying the compositional generalisation abilities of diffusion models (or generative models more broadly). To that end it introduces the notion of a concept graph, which the authors use to manipulate simple synthetic datasets. These concept graphs arrange different combinations of concept values and variables in a way that preserves a notion of distance between different combinations (based on bit flips). By studying how performance changes as models are required to reconstruct concept value combinations that are further apart from the training data, they can draw inferences regarding the emergence of generalisation capabilities in diffusion models.
Strengths: 1. The article is very well written and easy to follow. The idea of the concept graph and the manipulation of the dataset is clearly explained and well motivated.
2. The use of hypothesis-driven research, as opposed to the standard benchmarking approach, is a welcome change compared to most research in AI. That the way the authors present it makes it seem particularly novel is more a reflection on the field than on this particular work (but see later for some suggestions regarding this point).
3. The point that correlations between concept values make learning difficult for the models is important, since it highlights how much models still rely on correlations in spite of their superior generative capabilities.
Weaknesses: There are no substantial weaknesses, but there are some errors and omissions when discussing the literature which I will discuss below.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Is Multiplicative Emergence unsurprising when the authors are forcing accuracy to be 0 if at least one concept value is incorrect during out-of-distribution generation?
2. I find it weird that colour takes longer to learn than the other properties. Since the reconstruction error must be very high even if the shape and scale are correct, I would expect colour to be learned quicker. Do the authors have some insight into why this happens?
3. Is the relation to grokking really that strong? In grokking, models still exhibit high validation error even after the training error is low. But here
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: 1. It is incorrect to characterise some of the work discussed in lines 120 and 121 as merely performing benchmarking. Indeed, at least [94], [95] and [99] perform manipulations of the training data in much the same way as the present work. Thus, while I definitely like that the authors took a more hypothesis-driven approach to their work, it is not by any means the case that this has not been done before. Other works that do this include [1] and [2]. While these may not be based on generative models, they are still relevant examples of the kind of study that the authors are advocating for.
2. Relatedly, the authors should discuss the relation of their work with [3] and [4]. Unlike the previous references, these do involve compositional generalisation capabilities of generative models. The former also provides one characterisation of compositional generalisation in generative models that is similar to the one used in this work. Thus it is indeed not the case that there have been no previous proposals and that [101] is the only noteworthy attempt (which by its own admission, just summarises previous definitions anyway).
3. The latter uses an approach based on transforming input images which allows it to identify the concepts that vary between images in a way that resembles the example presented in Figure 2, panel b. Their results show a similar pattern as the one in this work where combinations that are close to the training data are easier for the models. I believe that both this work and the previously discussed ones deserve a slightly deeper discussion.
4. Lines 291 and 292 are very close to each other which makes the caption of Figure 7 and the main text hard to distinguish.
[1] Hermann, K., Chen, T., & Kornblith, S. (2020). The origins and prevalence of texture bias in convolutional neural networks. Advances in Neural Information Processing Systems, 33, 19000-19015.
[2] Geirhos, R., Jacobsen, J. H., Michaelis, C., Zemel, R., Brendel, W., Bethge, M., & Wichmann, F. A. (2020). Shortcut learning in deep neural networks. Nature Machine Intelligence, 2(11), 665-673.
[3] Montero, M. L., Ludwig, C. J., Costa, R. P., Malhotra, G., & Bowers, J. (2020, October). The role of disentanglement in generalisation. In International Conference on Learning Representations.
[4] Montero, M., Bowers, J., Ponte Costa, R., Ludwig, C., & Malhotra, G. (2022). Lost in Latent Space: Examining failures of disentangled models at combinatorial generalisation. Advances in Neural Information Processing Systems, 35, 10136-10149.
** note that I used the same citation numbers the authors used for the other references i.e. [95] above corresponds to reference [95] in the main text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer kruN:
Thank you for your positive response! We are delighted that you find our paper "well written and easy to follow", the introduced ideas of the concept graph "clearly explained and well motivated", and the use of a hypothesis-driven approach "a welcome change to AI research." Below, we address your raised questions and comments.
**Question on the role of the evaluation measure:** Great question! As we note in the appendix, defining a discrete measure that rewards correct concept values is intuitively more appropriate because, e.g., claiming a model can produce "avocado chairs" only makes sense if the generated image both looks like an avocado *and* a chair. However, to show our claims also hold for a more continuous measure, we have added a plot showing the progress of cross-entropy between ground-truth concept values and the predicted values via our probes (please see Figure R3). Even with this continuous measure, we observe the sudden emergence, as shown in Figure R3(a). This result is attributable to the multiplicative nature of our task setting, as highlighted in Figure R3(b).
**Question on delayed learning of color:** Our hypothesis is that the delayed learning of the capability to manipulate color arises because, in our setup, the majority of samples in a randomly sampled batch of data possess the same color, i.e., red. Consequently, the model quickly learns to produce the majority color, then learns to perform well on producing the other concepts, and only afterwards learns how to produce blue-colored samples, keeping the performance low on generating OOD concepts that require an ability to alter the color values.
**Clarification on relation to grokking:** This is a fair point. The term "grokking" is generally supposed to refer to an inability to perform well on test/validation data well after the training error has reduced to zero. In our scenario, since the model is able to generate samples from within the train distribution, grokking is more "distributional," i.e., the model suddenly learns how to generate samples from unseen distributions. We will ensure this point is further clarified in the paper.
---
**Limitations:**
**Response to Comment 1:** We agree with the reviewer that papers like [94], [95], and [99] perform systematic manipulations of training data for evaluating the compositionality of neural networks. Due to space constraints, we were forced to use a reductive assessment of "benchmarking", but will add a much more detailed discussion to clarify this. For now, we note that our novelty lies in the use of a model-experimental systems approach for evaluating the compositionality of diffusion models, a thorough formalization of the notion of compositionality in this scenario, relating compositional behavior with emergent phenomena seen in modern generative models, and the consequences of these results on learning biases due to correlations in training data. We will also add discussions on the reviewer's cited works on disentanglement; we were well aware of these papers, but again space constraints disallowed a fruitful discussion of them.
**Response to Comments 2 and 3:** Thank you for pointing out these references on disentanglement! While the goals of these papers are sufficiently different from ours (see the response to Comment 1 above), we will ensure a thorough discussion is included in the final version of the paper. We especially agree that the dataset structure in [3] is similar to ours, though our provided formalization and the precise results are very different.
**Response to Comment 4:** Thank you for this feedback; we will ensure this is addressed in the final version of the paper.
---
Rebuttal Comment 1.1:
Title: Typo: Please replace "cross-entropy" with "probabilistic accuracy"
Comment: Dear Reviewer kruN,
We would like to clarify a typographical error in our earlier rebuttal. In our discussion pertaining to Figure R3, please replace "cross-entropy" with "probabilistic accuracy." Initially, we experimented with cross-entropy loss, but for consistency with the notion of accuracy, we transitioned to this updated measure. We apologize for any confusion and direct you to our general response for further details.
---
Rebuttal Comment 1.2:
Comment: I agree with the issue of novelty when compared to previous work, and I look forward to the authors' updated discussion of previous work and its relation to their own, which I do agree is a good addition.
Thanks for the reply and will certainly update my score accordingly. | Rebuttal 1:
Rebuttal: Dear Reviewers,
We thank all reviewers for their diligent efforts in evaluating our submission. We are pleased by the unanimous recognition and support for our scientific approach, aimed at enhancing the understanding of diffusion models using minimal synthetic data. We would also like to thank the reviewers for the constructive and actionable feedback to further strengthen our claims in more diverse and realistic setups. In the attached one-page PDF, we have incorporated **five** new experimental figures that cover all the requested additions. With these enhancements, we are now confident that our paper will constitute a unique and impactful contribution to the NeurIPS conference.
---
**Summary of new experimental figures in response to reviewers' feedback:**
* **[Figure R1] Sequential Generalization on Concept Graph Holds with 4 (>3) Attributes (L63E, wac1):** We have performed new experiments with 4 attributes. Here, we introduced a new concept of "background color" (ranging from 0 for black to 1 for white) in addition to the three existing concepts of "shape," "object color," and "size." Through this experiment, we confirmed that our hypothesis of “Sequential compositional generalization on concept graphs” generalizes to the case with more than three attributes. Memorization occurs first, as represented by the blue lines. Subsequently, generalization happens in sequence based on the concept distance: Concept distance 1 (depicted by pink lines) comes first, followed by concept distance 2 (red lines). Lastly, concept distance 3, represented by the green line, arises. Overall, our claim holds more generally beyond the concept cube!
* **[Figure R2] Our Observations Hold in More Realistic Dataset (bYvo, L63E, wac1):** To generalize our hypotheses in more realistic settings, we have performed experiments with the CelebA dataset. We have chosen three attributes for our concepts: Gender ("Female" and "Male"), Smiling ("Smiling" and "Not Smiling"), and Hair Color ("Black Hair" and "Blonde Hair"). The corresponding concept graph for these attributes in the CelebA dataset can be found in Figure R2(a). We trained a 3-layer CNN for each individual concept and used the product of accuracies from each concept as the evaluation measure. While not all nodes reached an accuracy of 100% due to insufficient training time, our results consistently support our primary observations:
* **Learning dynamics respect the structure of the concept graph:** Figure R2(b) illustrates the learning dynamics using the real CelebA dataset. We again observe the sequential pattern in generalization. The accuracy begins to rise at concept distance 1 (denoted by pink lines), followed by concept distance 2 (red line). Notably, the node labeled '011', even though it's at concept distance 1, precedes the memorization stage (represented by blue lines). This exception can be attributed to the gender bias present in the training data, which we explain below.
* **Delayed Generalization of Minority Attribute:** In Figure R2(c), we plotted the accuracy for the individual concept of gender (i.e., Female and Male). The Female concept class's training curve (represented by red lines) reaches convergence faster compared to the Male concept class (depicted by blue lines). This disparity stems from the Female class being more predominant in the dataset, thereby making the Male class a minority. This observation offers useful insights for practitioners: To mitigate biased outcomes against minority groups, we need to further train the diffusion model beyond its initial convergence on the training dataset.
* **[Figure R3] Emergence Persists with Probabilistic Accuracy Measure (kruN):** In Figure R3, we plot the learning dynamics for the probabilistic accuracy. Here, the probabilistic accuracy is calculated as the product of probabilistic accuracies $p(f(x^{(n)}), v^{(n)})$ from each individual concept, where $n$ is the index of the test sample, $f(\cdot)$ represents the classifier, $x^{(n)}$ is the generated image, and $v^{(n)}$ denotes the actual (ground truth) concept class (i.e., attribute). Using this probabilistic metric, we once again observe the sudden emergence of capability in the test nodes (represented by pink and red lines), especially for node "111" (highlighted by the red line). This confirms that our observation of the emergent capability isn't solely due to the choice of evaluation metric.
* **[Figure R4] Attention Mechanisms Accelerate Compositional Generalization (bYvo):** In Figures R4(a)-(c), we explored three distinct architectures as the backbone of the diffusion model: U-Net without global attention (Figure R4(a)), U-Net with global attention (Figure R4(b)), and the Transformer model (Figure R4(c)). Our observations indicate that the models with attention mechanisms initially exhibit a slower learning curve. However, once they commence learning, there is a notable surge in their accuracy. Specifically, in Figure R4(c), the Transformer's accuracy initially increased slowly but quickly peaked at 100% shortly after.
* **[Figure R5] Our Observations Hold Robustly across Diverse Configurations (bYvo):** Figures R5(a)-(c) show the learning dynamics across different hyperparameter settings for the diffusion model. Specifically, we varied the number of units in U-Net, labeled as "units" in the figures, and the number of time steps in the diffusion model, denoted as "time steps" in the figures. We verified that our findings regarding (i) sequential generalization of capability based on concept distance and (ii) sudden emergence of capability remain consistent across all hyperparameter settings. Based on your insightful recommendations, we are carrying out extensive experiments, e.g., using various noise schedules. However, due to page limitations, we have omitted some of these. Importantly, all the results we have now support our primary observations!
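The probabilistic accuracy used in Figure R3 can be sketched numerically as follows (a minimal illustration under our own naming, not the authors' code): for each test sample, multiply the probability each concept classifier assigns to the ground-truth concept value.

```python
import math

# Hypothetical sketch of the "probabilistic accuracy" described above:
# the product of the probabilities each concept classifier assigns to the
# ground-truth concept value of one generated sample.
def probabilistic_accuracy(per_concept_probs):
    return math.prod(per_concept_probs)

# Three concepts, each recovered with probability 0.9:
score = probabilistic_accuracy([0.9, 0.9, 0.9])  # ~0.729
```

Because the measure is multiplicative, even moderately high per-concept probabilities yield a small composite score, consistent with the sudden-emergence pattern the figure reports.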
Pdf: /pdf/d9fe0cde9fdc64b5375f09b06d5e579cbb1970fd.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Meek Separators and Their Applications in Targeted Causal Discovery | Accept (poster) | Summary: The paper focuses on applications of causal discovery, in which it is not necessary to learn the full graph. Instead, the authors propose to recover what they call the Meek separator---which consists of a set of vertices that decomposes the unoriented edges into smaller connected components when intervened on. Further, they propose two randomized algorithms (for subset search and causal mean matching) for which they prove logarithmic approximation guarantees.
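The separator idea summarized above can be illustrated with a generic graph-theoretic toy (our sketch, unrelated to the paper's actual algorithms): removing a separator set from an undirected graph splits the remaining vertices into smaller connected components, mirroring how intervening on a Meek separator decomposes the unoriented part of the graph.

```python
from collections import deque

# Toy sketch (ours): connected components of an undirected graph after
# deleting a set of "separator" vertices, found via breadth-first search.
def components(adj, removed):
    seen, comps = set(removed), []
    for start in adj:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            comp.add(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        comps.append(comp)
    return comps

# A path 1-2-3-4-5: removing vertex 3 leaves two components, {1, 2} and {4, 5}.
path = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
parts = components(path, removed={3})
```

This only shows the decomposition effect of a vertex separator; the paper's contribution is finding such sets for the *unoriented* edges of a partially directed causal graph via interventions, which this sketch does not model.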
Strengths: - The authors provide strong theoretical guarantees (logarithmic approximation guarantees in expectation) for both of their proposed randomized algorithms.
- On synthetic data, the proposed algorithms perform well compared to the baselines.
- The code for the experiments is provided.
Weaknesses: - The paper is very dense and hard to follow. In particular, I would appreciate it if more effort were put into motivating the main concepts---i.e., why do we want to learn the Meek separator, and why is the definition reasonable? Figure 1 is supposed to provide an example, but very little intuition is provided to guide the reader through the proposed concepts.
- Although the problem is motivated by learning partial information from gene expression networks and referring to [FMT+21], no real-world example was studied.
Minor feedback:
- In the introduction, it is stated that only the Markov equivalence class can be learned from observational data. This is only partially true. The Markov equivalence class can be learned assuming, e.g., faithfulness and the causal Markov property, but with different assumptions, e.g., assumptions w.r.t. the SCM (linear non-Gaussian additive noise), the DAG is identifiable.
- Line 84: The $\sim$ relation is used to indicate if two vertices are “connected”. I think the notion of “adjacent” is more common since connected could also mean that the path between the vertices has length $> 1$.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - It would be great if the authors could provide more intuition as to why the Meek separator is a good set of vertices to learn. For example, if we are interested in the neighborhood of a node, we could also learn the Markov blanket---which advantages does the Meek separator bring with it?
- In Figure 1, why can we ignore all directed edges? The intervention on 2 should only delete the edge $1 \to 2$.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: - No real-world examples have been provided.
- To me, it is unclear what exactly the benefits of the proposed Meek separator are, and how it relates to, e.g., Markov blankets.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate that the reviewer found our theoretical results to be strong. We would like to address some of the reviewer’s comments below:
> **”motivating the main concepts---i.e., why do we want to learn the Meek separator, and why is the definition reasonable? Figure 1... guide the reader through the proposed concepts.”**
We are motivated to study the Meek separator due to its ability to break down *the remaining unoriented edges* of the DAG into more manageable connected components upon interventions. As emphasized in both the abstract and introduction, the Meek separator discovery algorithm holds promise for enabling divide-and-conquer strategies in solving diverse targeted causal discovery challenges. Specifically, our research showcases its application in crafting approximation algorithms for problems like subset search and mean matching.
Furthermore, this definition is in contrast with the traditional *graph separator*, which decomposes the *full graph* (instead of *the remaining unoriented edges*) into smaller connected components (Definition 1). Note that the graph separator does not make use of the information of interventions. Since we are able to learn some edge orientations after interventions, the set of *unoriented edges* can be much smaller than the set of edges in the *full graph*. Therefore the size of a Meek separator can be much smaller than the size of a graph separator. Figure 1 aims to illustrate this.
In the *PDF attached to the general response,* we provided an updated version of Figure 1 (which will be used in the revised manuscript). In particular, we added detailed explanations of the Meek separator and a separate panel illustrating the graph separator. Figure 1d shows a 1/2-graph separator, which contains 2 vertices. Since each pair of vertices is adjacent, removing 1 vertex will leave a connected component of size $4-1=3>2$. Therefore any 1/2-graph separator must contain at least 2 vertices. However, Figure 1b shows that there exists a Meek separator that contains only 1 vertex.
> **”no real-world example was studied.”**
We thank the reviewer for this comment. In our experiments, we used synthetic simulations as a means to demonstrate how our algorithm can be useful in real-world applications. These simulations are idealized models for real-world experiments. For example, the task of mean matching can be used as an abstraction of a cell reprogramming experiment, where the shift interventions can be used to model gene over-expression or knockdowns (as discussed in the introduction).
While we acknowledge the importance of real-world experiments, one major challenge lies in the nature of the adaptive policy algorithm, where access to real-world sequential data is not readily available. Similar constraints are present in previous papers in this line of work (e.g., [1-4]), where the evaluations are often based on synthetic data. Moreover, the implementation of these algorithms in real-world applications necessitates close collaboration with individuals engaged in experimentation. Going forward, we are actively considering collaborations with experimentalists or the utilization of semi-synthetic data generated from real experiments. However, much work remains to be done to benchmark these real datasets and thus we consider it out of scope for this work.
> **”In the introduction, it is stated that only the Markov equivalence class can be learned from observational data. This is only partially true. The Markov equivalence class can be learned assuming, e.g., faithfulness and the causal Markov property, but with different assumptions... ”**
We thank the reviewer for pointing this out. In the introduction, we stated that a DAG is *generally* only identifiable up to its MEC with observational data. But we will make this clearer by adding a statement that this result holds in nonparametric SCMs while additional identifiability can be achieved by considering parametric SCMs (e.g., linear SCMs with non-Gaussian additive noises).
> **”Line 84: The ∼ relation is used to indicate if two vertices are “connected”. I think the notion of “adjacent” is more common since connected could also mean that the path between the vertices has length >1.”**
Thank you for this comment. We will revise “connected” to “adjacent” accordingly.
> **”provide more intuition as to why the Meek separator is a good set of vertices to learn... if we are interested in the neighborhood of a node, we could also learn the Markov blanket---which advantages does the Meek separator bring with it?”**
See above for the intuition of why we want to learn the Meek separator and its benefits. Regarding Markov blankets, we do not see a way to make direct comparisons. First, Markov blankets can be learned from observational data, while our work aims to learn a pre-specified subset of edges from interventional data using the fewest interventions. Second, an identified Markov blanket does not distinguish causes from effects (i.e., edge orientations) [5], while we aim to learn edge orientations in subset search. More importantly, our goal lies not only in learning the neighborhood of a node, but also in learning an arbitrary set of edges for the subset search problem or a matching intervention for the mean matching problem.
> **”In Figure 1, why can we ignore all directed edges? The intervention on 2 should only delete the edge 1$\rightarrow$ 2.”**
Intervening on a vertex allows us to identify all edges adjacent to it and possibly additional edges given by the Meek rules (Appendix A). See lines 125-127 for details. Therefore in Figure 1b, when we intervene on vertex 2, it allows us to identify the edges $1\rightarrow 2, 2\rightarrow 3, 2\rightarrow 4$ and $1\rightarrow 3, 1\rightarrow 4$ by Meek rules. Then in Figure 1c, we show the connected components of the graph after removing all the oriented edges.
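For concreteness, the orientation logic in this example can be sketched in a few lines of Python (an illustrative sketch of ours, not the paper's implementation; only Meek rules 1 and 2 are implemented, since rules 3 and 4 never fire on this example):

```python
# Illustrative sketch (ours, not the paper's implementation): reproducing the
# orientations in Figure 1b on the fully connected 4-vertex DAG with true
# edges 1->2, 1->3, 1->4, 2->3, 2->4, 3->4, after intervening on vertex 2.
# Only Meek rules 1 and 2 are implemented; rules 3 and 4 never fire here.

def meek_closure(oriented, undirected, adjacent):
    """Apply Meek rules 1 and 2 until no further edge can be oriented."""
    changed = True
    while changed:
        changed = False
        for edge in list(undirected):
            x, y = tuple(edge)
            for u, v in ((x, y), (y, x)):
                # Rule 1: a -> u, u - v, with a and v non-adjacent  =>  u -> v
                r1 = any((a, u) in oriented and v not in adjacent[a]
                         for a in adjacent[u] if a != v)
                # Rule 2: u -> w -> v with u - v  =>  u -> v
                r2 = any((u, w) in oriented and (w, v) in oriented
                         for w in adjacent[u] & adjacent[v])
                if r1 or r2:
                    oriented.add((u, v))
                    undirected.discard(edge)
                    changed = True
                    break
    return oriented, undirected

true_edges = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
adjacent = {i: {j for e in true_edges if i in e for j in e if j != i}
            for i in range(1, 5)}

# Intervening on vertex 2 orients every edge incident to it.
oriented = {e for e in true_edges if 2 in e}
undirected = {frozenset(e) for e in true_edges} - {frozenset(e) for e in oriented}

oriented, undirected = meek_closure(oriented, undirected, adjacent)
print(sorted(oriented))  # edges 1->2, 2->3, 2->4 plus 1->3, 1->4 via Meek rule 2
print(undirected)        # only the edge 3-4 remains unoriented
```

Here Meek rule 2 does all the propagation: from $1\rightarrow 2\rightarrow 3$ with $1-3$ undirected it orients $1\rightarrow 3$, and likewise $1\rightarrow 4$.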
---
All references in this response can be found in the general response.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications! I have read the other reviews and the rebuttal and do not see any major concerns regarding the paper. I hope that the authors incorporate the feedback from the reviewers and aim to improve the readability of the paper to make it more accessible to a broader audience. I adjusted my score to "weak accept".
---
Reply to Comment 1.1.1:
Title: Thank you for the reply and discussions!
Comment: Thank you for the reply and discussions! We will incorporate the feedback into our revision. Let us know if there are any additional questions! | Summary: This paper studies the problem of learning causal structure by learning a minimal intervention set, which is formalized as the Meek separator. The authors show that the Meek separator can orient the maximum number of edges with the minimum intervention set while limiting the size of the remaining undirected connected components to $\alpha |V|$. Furthermore, the authors provide a logarithmic-time algorithm to determine the variables of the Meek separator based on a binary search on the essential graph. Also, the authors show that the problems of subset search and causal matching can be well addressed by the Meek separator.
Strengths: 1. This paper provides a Meek separator and algorithm that only require logarithmic time complexity, which addresses an important problem of causal discovery.
2. The proposed Meek separator can be flexibly applied to subset search and causal matching. The experimental results verify the effectiveness of the proposed algorithms.
3. The proposed theorem looks sound.
Weaknesses: 1. Section 2 has poor readability due to the many notions and symbols. I suggest that the definitions and the related work be divided into two subsections.
2. I think some examples should be provided for illustrating significant graphical concepts, such as moral graph and essential graph.
3. The result of Lemma 10 is interesting but is not easy to understand. Can you provide an intuitive explanation for it?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors:
Can your method be directly applied to learn the entire causal graph? Will there be any new challenges?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Refer to Weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for appreciating the problem we studied, and for your recognition of our result! We’ve added illustrative examples as per your suggestion and we’d like to address your concerns below:
> **”Section 2 has poor readability due to the many notions and symbols. I suggest that the definitions and the related work be divided into two subsections.”**
Thank you for this feedback. When we originally wrote the definitions in Section 2, it seemed to be a natural place to refer to related work, as these concepts were first defined in those works. However, to improve readability, we will divide this section into subsections describing:
- basic graphical concepts (paragraph 1, lines 81-97),
- graphical concepts relating to DAGs and causal models (paragraph 2, lines 98-116), and
- concepts involving interventions (paragraphs 3-6, lines 117-152).
In addition, we will also add a few illustrative examples describing some key definitions in Appendix B (see our response to the other comment regarding examples below).
> **”I think some examples should be provided for illustrating significant graphical concepts, such as moral graph and essential graph.”**
Thank you for this suggestion! In the *PDF attached to the general response*, we added a few examples illustrating the key graphical concepts. In particular,
- *Figure 3* illustrates a moral graph versus a graph that is not moral. We will add this example to Appendix B and refer to it when moral DAGs are introduced in Section 2.
- *Figure 4* illustrates the essential graph, $\mathcal{I}$-essential graph, and connected components. We will add this example to Appendix B and refer to it when they are defined in Section 2.
- In addition, *Figure 1* now includes a separate panel illustrating the traditional graph separator and detailed explanations of our defined Meek separator. We will replace Figure 1 in the revised manuscript with this.
> **”The result of Lemma 10 is interesting but is not easy to understand. Can you provide an intuitive explanation for it?”**
Thank you for this suggestion! Intuitively, we can explain the result of Lemma 10 below.
Lemma 10 establishes the existence of a subset of at most two vertices that form a Meek separator. It also shows that this subset will satisfy several nice properties. These properties empower us (as depicted in Algorithm 1) to discover such subsets using a binary search method within the vertices of the provided 1/2-clique separator.
To elaborate, the statement of Lemma 10 is three-fold. First, it states that there is a vertex $u$ that is almost central to the graph (i.e., it fulfills the **constraint** “$|A_u| \le |V(\mathcal{G})|/2$ and $|A_v| > |V(\mathcal{G})|/2$ for all $v \in Des(u) \cap K$”). Second, it states that any vertex fulfilling this constraint will satisfy one of the two **conditions** (i.e., “1) either $u$ is a sink vertex …, or 2) …”). Third, it states that the $u$ (and potentially $x$) stated in the conditions corresponds to Meek separators. Consequently, by finding a vertex that fulfills the constraint, we obtain a Meek separator from the condition it satisfies.
See also the general response for an intuitive explanation of its proof.
> **”Can your method be directly applied to learn the entire causal graph? Will there be any new challenges?”**
Yes. Our algorithm for subset search can be directly applied to learn the entire causal graph, by specifying the target edges to be the set of all edges. In this case, we get an $\mathcal{O}(\log n)$ approximation for recovering the entire DAG, which matches the current best approximation ratio in [4]. This discussion can be found in lines 208-209.
---
The references in this response can be found in the general response.
---
Rebuttal 2:
Title: I appreciate the responses from the authors
Comment: I appreciate the responses from the authors. Most of my concerns are addressed, so I will change my score to "Accept". I suggest the authors incorporate the responses to the final version of the paper if accepted.
---
Rebuttal Comment 2.1:
Title: We greatly appreciate your prompt response and the valuable feedback.
Comment: We greatly appreciate your prompt response and the valuable feedback. We will certainly integrate these suggested changes into our revision. Additionally, please feel free to reach out if there are any further questions.
On an additional note, we just want to highlight that the score does not yet appear to have changed from "weak accept" to "accept" in the system. Is this something that was anticipated? If so, our sincere apologies for bringing up this matter.
Strengths: The paper tackles a very challenging problem.
The meek separator introduced in this paper is novel and interesting.
The applications of the Meek Separator in subset search and causal matching are both significant.
The analysis in this paper is presented in a logical manner.
Weaknesses: Regarding readability: The majority of the article consists of descriptive statements that provide definitions and conclusions but lack simple examples for illustration. In order to help more readers understand and learn, it is recommended to provide some basic examples for key definitions and algorithms.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: All theoretical results are based on the assumption that all variables are observable, meaning that there are no hidden variables.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you for the encouraging comments! We appreciate that you think our proposal is novel and significant. We’ve added illustrative examples for better readability and we’d like to address your comments here:
> **”Regarding readability: The majority of the article consists of descriptive statements that provide definitions and conclusions but lack simple examples for illustration. In order to help more readers understand and learn, it is recommended to provide some basic examples for key definitions and algorithms.”**
Thank you for this suggestion. In the *PDF attached to the general response*, we added a few examples illustrating the key definitions and algorithms. In particular,
- *Figure 1* now includes detailed explanations of our defined Meek separator and a separate panel illustrating the traditional graph separator. We will replace Figure 1 in the revised manuscript with this.
- We added *Figure 2* to show how Algorithm 1 finds the Meek separator in a 4-vertex DAG example. In this example, we walk through the iterations of Algorithm 1 while specifying the realizations of the selected $u_i$ in line 4 of Algorithm 1. We will add this example to section 4 of the revised manuscript.
- *Figure 3* and *Figure 4* illustrate key graphical concepts that were used, including moral graphs, essential graphs, and connected components. We will add these examples to Appendix B and refer to these figures when they are introduced in Section 2.
> **”All theoretical results are based on the assumption that all variables are observable, meaning that there are no hidden variables.”**
We appreciate this comment regarding our assumptions of no hidden variables. We agree that it would be very interesting to explore settings with latent confounders. We consider this as an important aspect to extend this line of work that often assumes causal sufficiency, and we hope to address this further in future works.
---
Rebuttal Comment 1.1:
Title: Thanks for your response
Comment: Thanks for your response, after reading the response and other reviewers' comments, I will keep my score.
---
Reply to Comment 1.1.1:
Title: Thank you for the discussion
Comment: Thank you for the discussion! We are happy to address further comments if there are any. | Summary: * The paper provides an algorithm for finding a subset of vertices in a causal graph that, when intervened, can turn undirected edge into smaller connected components for learning a part of the causal graph.
* The proposed algorithm comes with first known average-case provable guarantees for two applications: subset search and causal matching.
Strengths: * The authors propose a randomized algorithm that finds an intervention set of small size that, when intervened, decomposes the remaining undirected edges into connected components of smaller sizes.
* The authors demonstrate the utility of the proposed targeted causal discovery algorithm in the problems of subset search and causal mean matching with analysis to show exponential improvements upon existing results in those domains.
Weaknesses: * It would be advisable to add examples to show how the algorithm finds a Meek separator.
* The paper concerns problems motivated by practical scenarios, but the paper lacks real-world experiments to demonstrate the utility of the proposed algorithm.
* The proofs of the theorems and lemma often omit details. Please see the questions below.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * Suggestions:
* It would be good to use an example that both illustrates the idea of Meek separator and the limitations of the graph separator in Figure 1. Also, use the same example to show why any alpha-graph separator must contain at least $(1-\alpha) |V|$ vertices.
* Add the description of w(G) in the Lemma 2 like theorem 5.
* Lemma 2 should cite Theorem 1 and Theorem 3 in the reference.
* Defining what “joins” means in Definition 1 would be good.
* In line 272, “the set $u,x$” should have curly brackets around $u, x$. Same for line 277.
* In line 536 of the supplement, $H$ should be $\mathcal{H}$
* In line 575, $V(\mathcal{H}) = V(\mathcal{G}) \setminus Des[u]$ should be $V(\mathcal{H}) \subseteq V(\mathcal{G}) \setminus Des[u]$.
* Questions:
* The intervention set $\mathcal{I}$ is defined as a set of sets right above definition 3, shouldn’t Figure 1b write $\mathcal{I} = \{\{ 2\} \} $?
* In line 267, the authors use $V(\mathcal{H}) \cap V(K) = \emptyset$, in line 270, the authors use $V(\mathcal{H}) \cap K = \emptyset$, what is the difference between $K$ and $V(K)$?
* Is it possible to have different Meek separators where one is preferable to the other per iteration?
* Will the performance of the algorithms be different for soft interventions and hard interventions in the experiment?
* Regarding Lemma 11, how can you conclude intervening on $v$ orients the edge $(c,d)$ without proving $v \in Des[w] \cap Anc[d]$ for some $w \in Anc[c]$ for any $v$ as described by Lemma 18?
* In the proof of Lemma 10, in line 573, it says "It is important to note that u fulfills the conditions specified in the lemma.", can you specify on which condition specifically? Also, in line 574, why does it need to suppose $u$ is a sink vertex of $K$ when $u$ is defined as the last vertex in terms of true ordering within $K$?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you so much for your detailed review, and for acknowledging our randomized algorithm! We would like to address your points below:
> **”add examples to show how the algorithm finds a Meek separator.”**
We thank the reviewer for this suggestion. The added examples and detailed explanations can be found in the general response.
> **”lacks real-world experiments to demonstrate the utility of the proposed algorithm.”**
In our paper, we used synthetic simulations as a means to demonstrate how our algorithm can be useful in real-world applications. These simulations serve as idealized models for real-world experiments. For example, the task of mean matching can be used as an abstraction of a cell reprogramming experiment, where the shift interventions can be used to model gene over-expression or knockdowns (as discussed in the introduction).
While we acknowledge the importance of real-world experiments, one major challenge lies in the nature of the adaptive policy algorithm, where access to real-world sequential data is not readily available. Similar constraints are present in previous papers in this line of work (e.g., [1-4]), where the evaluations are often based on synthetic data. Moreover, the implementation of these algorithms in real-world applications necessitates close collaboration with individuals engaged in experimentation. Going forward, we are actively considering collaborations with experimentalists or the utilization of semi-synthetic data generated from real experiments. However, much work remains to be done to benchmark these real datasets and thus we consider it out of scope for this work.
> **”an example that both illustrates the idea of Meek separator and the limitations of the graph separator in Figure 1... why any $\alpha$-graph separator must contain at least $(1-\alpha) |V|$ vertices.”**
We thank the reviewer for this suggestion. In *Figure 1* in the *PDF attached to the general response* (which will replace Figure 1 in the revised manuscript), we added detailed explanations of the Meek separator as well as a separate panel illustrating a graph separator. Figure 1d shows a 1/2-graph separator, which contains 2 vertices. Since each pair of vertices is adjacent, removing 1 vertex will leave a connected component of size $4-1=3>2$. Therefore any 1/2-graph separator must contain at least 2 vertices.
This example shows a 4-vertex fully connected DAG and a 1/2-graph separator. It can be extended to fully connected DAGs with $|V|$ vertices and $\alpha$-graph separators. Since every pair of vertices in the clique is adjacent, removing $<(1-\alpha)|V|$ vertices will leave a connected component of size $>\alpha |V|$. Thus any $\alpha$-graph separator in this case must contain at least $(1-\alpha)|V|$ vertices. This discussion is provided in lines 162-164.
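This counting argument can also be sanity-checked mechanically. The following sketch (ours, purely illustrative) enumerates all single-vertex removals in the 4-clique and confirms that each leaves a connected component of size $3 > \alpha|V| = 2$:

```python
# Sanity check of the counting argument (ours, illustrative): in a clique on
# n vertices, removing any k < (1 - alpha) * n vertices leaves one connected
# component of size n - k > alpha * n, so no smaller graph separator exists.
from itertools import combinations

def largest_component(vertices, edges):
    """Size of the largest connected component, via iterative DFS."""
    adj = {v: set() for v in vertices}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, best = set(), 0
    for s in vertices:
        if s in seen:
            continue
        stack, size = [s], 0
        seen.add(s)
        while stack:
            v = stack.pop()
            size += 1
            for w in adj[v] - seen:
                seen.add(w)
                stack.append(w)
        best = max(best, size)
    return best

n, alpha = 4, 0.5
vertices = set(range(1, n + 1))
clique = list(combinations(vertices, 2))

# Removing any single vertex (fewer than (1 - alpha) * n = 2 removals)
# leaves a component of size 3 > alpha * n = 2.
for removed in vertices:
    rest = vertices - {removed}
    edges = [(a, b) for a, b in clique if a in rest and b in rest]
    assert largest_component(rest, edges) > alpha * n
print("every single-vertex removal leaves a component of size > 2")
```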
> **”Add the description of w(G) in the Lemma 2”**
In Lemma 2, we stated $\omega(\mathcal{G})$ as "vertices in its largest clique”. However, we will make this clearer by separating this description into its own sentence.
Thank you for the other suggestions as well:
> **”Lemma 2 should cite Theorem 1 and Theorem 3 … should be** $V(\mathcal{H}) \subseteq V(\mathcal{G}) \setminus Des[u]$**.”**
We will add detailed pointers to Theorems 1 and 3 of the reference in Lemma 2, and change lines 272, 277, 536, and 575 as suggested.
> **”shouldn’t Figure 1b write $\mathcal{I} = \{\{2\}\}$?”**
Thank you for pointing this out. $\mathcal{I}$ is defined as a set of sets, and therefore it should indeed be $\mathcal{I} = \{\{2\}\}$.
> **“In line 267, the authors use $V(\mathcal{H})\cap V(K)=\varnothing$, in line 270, the authors use $V(\mathcal{H})\cap K=\varnothing$, what is the difference between $K$ and $V(K)$?”**
We use $K$ to denote the graph and $V(K)$ to denote its vertex set. Therefore, it should be $V(\mathcal{H})\cap V(K)=\varnothing$ in line 270. We intended to denote all vertex sets by $V(\cdot)$; however, we missed a few instances. In the revised version, we will correct the remaining ones.
> **”different Meek separators where one is preferable to the other?”**
For the current set of applications, the Meek separators are only used to break down a larger graph into smaller, manageable subgraphs. In this context, any Meek separator that can be efficiently computed with minimal interventions provides the same guarantee. Therefore, for our current use cases, we do not find a significant difference between different Meek separators, and they all seem to be equally effective in solving the problems.
> **”performance of the algorithms different for soft interventions and hard interventions in the experiment?”**
No, the performance of the algorithms is not different for soft interventions and hard interventions in our experiment. The property of the interventions that we utilized is to identify additional edges. For both soft and hard interventions, the identifiability of the interventions remains the same, where we can learn the orientations of any edge cut by $I$ and $V\setminus I$ (here $I$ denotes the vertices being intervened and $V$ denotes all the vertices in the graph) and possibly additional edges given by the Meek rules (Appendix A). This discussion can be found in lines 125-127.
> **”Regarding Lemma 11, ..., described by Lemma 18?”**
Thank you for pointing this out. Lemma 18 establishes that the edge $(c,d)$ is orientable through intervention on any vertex within the set $Des[w] \cap Anc[d]$ for some fixed $w \in Anc[c]$. As $w \in Anc[c]$, we have that $Des[c] \subseteq Des[w]$, and therefore $Des[c] \cap Anc[d] \subseteq Des[w] \cap Anc[d]$; consequently as $v \in Des[c] \cap Anc[d]$, we immediately get that $v$ orients the edge $(c,d)$. We will incorporate these lines into our proof to enhance its readability.
> **”In the proof of Lemma 10...”**
We apologize for the confusion! Due to character limits, we gave detailed clarifications in the general response.
---
All references in this response can be found in the general response.
---
Rebuttal Comment 1.1:
Title: Follow-up questions
Comment: I appreciate the authors' response. Does the $\mathcal{I}$ in Figure 2 caption in the newly attached pdf file mean a different thing? Why is it not $\mathcal{I} =$ {{$2$}}?
---
Reply to Comment 1.1.1:
Title: Response to follow-up questions
Comment: Thank you for the quick response! We are happy to clarify the follow-up question:
- $\mathcal{I}$ in the Figure 2 caption refers to the set $\mathcal{I} = \{u_i\}$ returned by the Meek separator algorithm (line 7 of Algorithm 1). For example, here in Figure 2, it returned $\mathcal{I} = \{2\}$, meaning $u_i = 2$. Invoking the termination condition of the algorithm (line 6 of Algorithm 1), this node $u_i = 2$ corresponds to a Meek separator, i.e., $\{\{2\}\}$ is a Meek separator.
We understand that this notation might be a bit confusing given the previous usage of $\mathcal{I}$ as a set of sets. Therefore, we will revise all places in the paper that use calligraphic notation like $\mathcal{I}$ to be sets of sets. This includes changing the Figure 2 caption of the newly attached pdf to $\mathcal{I} = \{\{2\}\}$ and line 7 of Algorithm 1 to $\mathcal{I} = \{\{u_i\}\}$.
Rebuttal: We thank all the reviewers for their insightful comments and suggestions.
---
In this general response, we attached a pdf of the additional figures that we will add to the manuscript. To summarize, this includes:
- A modified *Figure 1,* which now includes detailed explanations of our defined Meek separator and a separate panel illustrating the traditional graph separator. We will replace Figure 1 in the revised manuscript with this.
- *Figure 2,* which shows how Algorithm 1 finds the Meek separator in a 4-vertex DAG example. In this example, we walk through the iterations of Algorithm 1 while specifying realizations of which $u_i$ is picked in line 4 of Algorithm 1. We will add this example to section 4 of the revised manuscript.
- *Figure 3*, which illustrates a moral graph versus a graph that is not moral. We will add this example to Appendix B and refer to it when moral DAGs are introduced in Section 2.
- *Figure 4*, which illustrates the essential graph, $\mathcal{I}$-essential graph, and connected components. We will add this example to Appendix B and refer to it when they are defined in Section 2.
---
In addition, we’d like to provide further information regarding two common points raised by the reviewers:
> **How the algorithm finds a Meek separator.**
In the *attached PDF*, we added a 4-vertex DAG example showing how Algorithm 1 finds the Meek separator in *Figure 2*. Further details can be found above and in the figure caption.
Intuitively, Algorithm 1 finds the Meek separator by doing *binary search* among the vertices of the inputted $1/2$-clique separator $K$. Lemma 10 establishes the existence of a subset of at most two vertices that form a Meek separator. It also shows that this subset must satisfy several nice properties. These properties empower us (as depicted in Algorithm 1) to identify such subsets using a binary search method within the vertices of the inputted 1/2-clique separator $K$.
To elaborate, since $K$ is a clique, its vertices have a natural order specified by the topological order of $K$. In each iteration, a vertex $u_i$ of $K$ is intervened on, and the remaining vertices of $K$ are separated into $u_i$’s parents and children (i.e., those before or after $u_i$ in the topological order). The algorithm then continues searching in the interval of the topological order ($K_i$ in Algorithm 1) that can potentially contain the Meek separator. This binary search procedure outputs a subset of vertices satisfying the conditions of Lemma 10, which is guaranteed to form a Meek separator.
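As an illustration only (this is our own toy sketch, not the paper's Algorithm 1, and it assumes for simplicity that $|A_{u_i}|$ is nondecreasing along the topological order of $K$), the binary-search skeleton looks like:

```python
def find_separator_index(a_sizes, n):
    """Toy binary search over the topological order of the clique K.

    a_sizes[i] stands in for |A_{u_i}|; we look for the last position i
    with |A_{u_i}| <= n/2, so that every later vertex v of K has
    |A_v| > n/2 (the constraint of Lemma 10). Returns -1 if no such
    vertex exists. Assumes a_sizes is nondecreasing (a simplification
    made only for this sketch).
    """
    lo, hi, ans = 0, len(a_sizes) - 1, -1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a_sizes[mid] <= n / 2:
            ans = mid          # constraint holds here; look for a later vertex
            lo = mid + 1
        else:
            hi = mid - 1       # |A| too large; the candidate must be earlier
    return ans
```

Each probe of `a_sizes[mid]` would correspond to one intervention on a vertex of $K$, which is why only logarithmically many interventions are needed.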
>**Statement, proof, and usefulness of Lemma 10.**
The statement of Lemma 10 is three-fold. Firstly, it states that there is a vertex $u$ that fulfills the **constraint** “$|A_u| \le |V(\mathcal{G})|/2$ and $|A_v| > |V(\mathcal{G})|/2$ for all $v \in Des(u) \cap K$”. Secondly, it states that *any* vertex fulfilling this **constraint** satisfies one of the two **conditions** “1) either $u$ is a sink vertex … 2) …”. Thirdly, it states that the $u$ (and potentially $x$) above corresponds to a Meek separator.
Based on this statement, the proof proceeds in the following logic:
- Paragraph 1 (lines 570-573) shows that there exists a vertex $u$ that fulfills the **constraint**. Therefore, the conditions asked about by Reviewer BZ2Z refer to “$|A_u| \le |V(\mathcal{G})|/2$ and $|A_v| > |V(\mathcal{G})|/2$ for all $v\in Des(u) \cap K$”. We will clarify this in the revised version by adding this description to line 573.
- Upon showing that the **constraint** can be fulfilled, we then show that *any* vertex fulfilling this **constraint** satisfies one of the two **conditions** “1) either $u$ is a sink vertex … 2) …”. Paragraph 2 (lines 574-575) therefore starts by discussing whether the $u$ that fulfills the constraint satisfies the first of the two conditions (i.e., whether it is the sink node). Paragraphs 3 and onwards then show that if $u$ does not satisfy condition 1, it satisfies condition 2. We will clarify this in the revised version by adding these descriptions to lines 574 and 579.
The usefulness of this lemma lies in finding a vertex that satisfies the **constraint** (via binary search; see the response above regarding Algorithm 1). Subsequently, the **conditions** that such a vertex satisfies lead to Meek separators. The division into two different conditions handles technicalities.
---
We provide the full list of references here:
[1] Karthikeyan Shanmugam, Murat Kocaoglu, Alexandros G. Dimakis, and Sriram Vishwanath. Learning Causal Graphs with Small Interventions. Advances in Neural Information Processing Systems, 28, 2015.
[2] Kristjan Greenewald, Dmitriy Katz, Karthikeyan Shanmugam, Sara Magliacane, Murat Kocaoglu, Enric Boix-Adserà, and Guy Bresler. Sample Efficient Active Learning of Causal Trees. *Advances in Neural Information Processing Systems*, 32, 2019.
[3] Jiaqi Zhang, Chandler Squires, and Caroline Uhler. Matching a desired causal state via shift interventions. *Advances in Neural Information Processing Systems*, 34:19923– 19934, 2021.
[4] Davin Choo and Kirankumar Shiragur. Subset verification and search algorithms for causal DAGs. *arXiv preprint arXiv:2301.03180*, 2023.
[5] Yu, Kui, Lin Liu, and Jiuyong Li. "Discovering Markov blanket from multiple interventional datasets." *arXiv preprint arXiv:1801.08295* (2018).
Pdf: /pdf/91457f6b8abbae0ad979bac6402878770f6a0e05.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Agnostically Learning Single-Index Models using Omnipredictors | Accept (poster) | Summary: The paper studies the problem of learning SIMs agnostically. The authors proposed an algorithm achieving $B\sqrt{opt}$ $\ell_2$-error under mild distributional assumptions. Their main contributions are twofold: 1) they linked the $\ell_2$ loss of a bi-lipschitz activation with its matching loss. 2) they proposed an algorithm that minimizes the matching loss for all Lipschitz activations. They also showed stronger guarantees for logistic regression and argued that the dependence of norm $B$ is inevitable in the final $\ell_2$ loss bound achievable for any efficient algorithm.
Strengths: The authors bring in the technique of fenchel duality to study the distortion bound between the $\ell_2$ loss and the matching loss for GLMs. Even though their idea is similar to theorem 5.5 in [GHK+23], the result is original. In addition, the authors proposed a method using omnipredictors to find a predictor that minimizes the matching loss with respect to all 1-Lipschitz activations. This idea is new to the reviewer.
Weaknesses: The result about the $\ell_2$ error of learning the sigmoid activation (Thm 4.1) seems weak, as there are already similar results in the literature; see, for example, Thm 3.3 of [FCG20]. Though [FCG20] uses slightly stronger assumptions (bounded $D_x$), their $\ell_2$ error does not have the logarithm factor. A detailed comparison with related works would help establish the significance of this paper.
The presentation of this paper lacks clarity and is confusing. There is no clear description of the algorithm proposed by the authors, and the sample complexity and runtime are not clearly stated anywhere in the paper. There are also many symbols used in the paper that are not defined; to name a few: the term $opt_g$ that appears in the proof of Thm 3.1, the $\varepsilon$ that appears in the statement of Thm C.2, and the function $l_2$ used in the proof of Thm 3.2.
The proofs lack readability. For example, in line 261, it is hard to understand why $opt_g\geq Pr[|w^*\cdot x|\leq 2\lambda B^2]$ without further clarification. It would also be helpful to have more details in the proof of thm 3.2, like the relation between $\epsilon_1$ and $\epsilon_2$, and the statement from line 533 to 537.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: Can the authors provide some intuition about multi-accuracy and multi-calibration? How do they relate to the omnipredictors for SIMs and matching loss? In particular, what exactly is the relation between multi-accuracy and the gradient of the matching loss, as mentioned in line 110?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 1 poor
Contribution: 3 good
Limitations: I agree with the authors that the potential limitation of this work is the exact dependence of norm $B$ is not justified. It would be interesting future work to find the tightest $\ell_2$ error bound of the SIMs.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We wish to thank the anonymous reviewer for their comments.
Regarding the significance of our Theorem 4.1, we stress that our distributional assumption (subgaussian concentration) is significantly milder than assuming the marginal to be bounded (e.g., a bounded marginal is, in particular, doubly exponentially concentrated). In fact, if we assume that the marginal is bounded and follow the same approach through distortion bounds, we obtain the same ($O(\mathrm{opt})$) guarantee as the one in [FCG20] for the sigmoid (our bounds improve from $\tilde{O}(\mathrm{opt})$ to $O(\mathrm{opt})$).
In our presentation, we emphasized subgaussian distributions to highlight that our method can handle broad classes of unbounded distributions.
Moreover, Theorems 4.1 and 4.3 provide new guarantees for the standard algorithm of logistic regression (i.e., minimizing the matching loss corresponding to the logit link) through a simple approach (distortion inequalities). Interestingly, we give the best known guarantees for the case where we only assume subgaussian concentration for the marginal (and no anti-concentration or anti-anti-concentration).
The algorithm we use is the one proposed in [GHK+23], as is implied within our proofs. In particular, the algorithm computes a calibrated and multiaccurate predictor by running a large enough alternating sequence of multiaccuracy and calibration steps (see Algorithm 2 in [GHK+23]). We are going to provide the pseudocode of the algorithm in the final version of the paper as it will indeed be helpful for the reader.
Our results are focused on providing accurate performance guarantees rather than optimizing the sample or time complexities. For this reason as well as for ease of presentation, we decided to not include the exact quantitative bounds on the sample and time complexity of our algorithm (although we do mention that they scale polynomially in all relevant parameters). However, the bounds are implicit in our analysis and we could extract them in case the reviewers believe this is important.
Regarding the question of the reviewer about line 261: if we assume that $g'(s)\not\in [-1,2]$ for any $|s|\le 2\lambda B^2$, we have
$$\mathrm{opt}_g = \mathbb{E}[(g'(w^*\cdot x) - y)^2] \ge \mathbb{E}\big[(g'(w^*\cdot x) - y)^2 \,\big|\, |w^*\cdot x| \le 2\lambda B^2\big] \cdot \Pr[|w^*\cdot x| \le 2\lambda B^2] \ge 1 \cdot \Pr[|w^*\cdot x| \le 2\lambda B^2],$$
since $y\in [0,1]$.
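As a quick numerical sanity check of this chain of inequalities (a toy setup of our own: a standard Gaussian stand-in for $w^*\cdot x$, uniform labels in $[0,1]$, a constant $g'(s)=3\not\in[-1,2]$, and a threshold $t$ playing the role of $2\lambda B^2$):

```python
import random

random.seed(0)
n, t, g_const = 100_000, 1.0, 3.0
opt_g, prob = 0.0, 0.0
for _ in range(n):
    s = random.gauss(0.0, 1.0)       # stand-in for w* . x
    y = random.random()              # arbitrary label in [0, 1]
    opt_g += (g_const - y) ** 2 / n  # estimates E[(g'(w* . x) - y)^2]
    prob += (abs(s) <= t) / n        # estimates Pr[|w* . x| <= t]
assert opt_g >= prob                 # (3 - y)^2 >= 4 >= 1, so the bound holds
```

Here `opt_g` comes out around $6.3$ while `prob` is around $0.68$, consistent with the inequality; the check is only illustrative, since the actual argument holds for any $g'$ outside $[-1,2]$ on the strip.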
While multiaccuracy and calibration are indeed relevant to our results, we primarily focus on the use of existing results from the literature of Omniprediction [GHK+23] (where multiaccuracy, multicalibration and calibration notions are presented and studied thoroughly) to acquire bounds for agnostic learning through the idea of distortion inequalities. In particular, [GHK+23] establish a non-trivial connection between fairness and omniprediction, while we focus on establishing a non-trivial connection between omniprediction and classical agnostic learning.
In general, a calibrated predictor $p(x)$ has the property that, conditioning $x$ on any of the level sets of $p$ (so that $p(x) = v$), the expected value of $y$ is close to the value of $p(x)$ ($=v$). A multiaccurate predictor is a predictor that is accurate in expectation with respect to its correlation with any function within a given concept class. A predictor that is calibrated and multiaccurate is an omnipredictor with respect to all matching losses corresponding to Lipschitz links (i.e., it minimizes all these matching losses simultaneously; see also lines 108-110 and Theorem 3.2, where the algorithm essentially computes a calibrated and multiaccurate predictor). Finally, as we mention in lines 103-105, the stationary points of a matching loss (points with zero gradient) correspond to multiaccurate predictors (see also Theorem 5.6 of [GHK+23]).
---
Rebuttal Comment 1.1:
Comment: I think the authors addressed my questions properly. I think agnostically learning SIMs is an interesting problem and I would like to see this paper published after refinement. I have changed my grade from 4 to 6. | Summary: The paper gives an efficient algorithm for learning Single Index Models with an arbitrary monotone and Lipschitz function under the condition that the marginal distribution of $x$ has bounded variance in all directions.
The error guarantee of the algorithm is $O(B \sqrt{\lambda} \sqrt{\mathrm{opt}})+\epsilon$ ($B=\|w\|_2$ and $\lambda$ is the bound on the variance), which is weaker than some previous results, but this paper also makes fewer assumptions on the marginal distribution.
The authors give an SQ lower bound justifying the dependence of their error on $B$. However, that lower bound does not imply the error needs to have a polynomial dependence on $B$ (i.e., an $\exp(\log^{1/2} B)$ dependence would suffice to not contradict the lower bound).
The high-level idea of the algorithm is this: the authors take the matching loss as the surrogate loss function. In the agnostic setting, it no longer holds that the matching loss and the squared loss have the same minimizer. Instead, the authors observe that there is a “distortion bound” between the matching loss (for a bi-Lipschitz function $u$) and the squared loss (Lemma 2.2). Namely, for any prediction $p$ and true value $y$, the matching loss is off by at most some offset (the matching loss between $y$ and itself) and a multiplicative factor (which depends on the bi-Lipschitz parameter of $u$). This means that a small matching loss implies a small squared loss (up to some $O(\mathrm{opt})$ error). The argument then proceeds as follows: to learn a SIM with an unknown activation function $u$, one can use an existing result to learn an “omnipredictor” that has small matching loss for every bi-Lipschitz function. Letting $u'$ be the bi-Lipschitz function nearest to $u$, boundedness implies it suffices to consider $u'$ instead of $u$. This gives their algorithm for SIMs.
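In symbols (my own schematic notation; the paper's Lemma 2.2 may state it differently), the distortion bound described above for a bi-Lipschitz $u$ with parameters $0 < c_1 \le c_2$ would have the shape

$$c_1\,(p - y)^2 \;\le\; \ell_u(p, y) - \ell_u(y, y) \;\le\; c_2\,(p - y)^2,$$

so minimizing the matching loss $\ell_u$ controls the squared loss up to the constants $c_1, c_2$ and the offset $\ell_u(y, y)$.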
Some other results they show (using a similar idea), ignoring the dependence on $B$ in the error, are:
1. A distribution-independent efficient algorithm for learning GLMs up to error $O(\mathrm{opt})$ when the activation function is bi-Lipschitz.
2. An efficient algorithm for logistic regression up to $O(\mathrm{opt})$ under some concentration assumption on the distribution.
Strengths: The algorithm holds under very mild assumption on the marginal distribution.
The distortion bound the authors prove might also have other interesting applications.
Weaknesses: The main weakness is the authors do not have a lower bound that can match the upper bound result they give.
It would be interesting to see if one can get a better algorithm or prove a tighter lower bound.
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors: It would be better if the authors can have a paragraph summarizing the high level ideas of the algorithms.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 4 excellent
Limitations: The authors have adequately addressed the limitations.
There is potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We wish to thank the anonymous reviewer for their suggestions and for appreciating our results. The reviewer is right that it would be interesting to have tight results (at least in the statistical query framework). However, our work not only provides the first upper bound for learning SIMs in the agnostic setting, but also demonstrates a link between the literature of omniprediction (which is in turn connected to notions from fairness like calibration and multicalibration) and the problem of learning SIMs, which might be useful in proving better upper or lower bounds in future work.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification. | Summary: This paper studies the learning of Single-Index Models (SIMs) with arbitrary monotone and Lipschitz activations. In the SIM model, labeled examples $(x, y)$ are assumed to satisfy $\mathbb{E}[y|x] = u^{-1}(w\cdot x)$, where $w$ is an unknown vector and $u$ is an unknown monotone function (a.k.a. link function). Given IID-drawn samples from an unknown distribution $D$ over $\mathbb{R}^d\times [0,1]$, the learner's objective is to minimize a predefined loss (e.g., squared loss) over the class of SIM models. This paper presents a learning algorithm with squared loss $O(\sqrt{\mathrm{opt}})+\epsilon$ if $u^{-1}$ is 1-Lipschitz, where $\mathrm{opt}$ is the minimum squared error among all SIM models with bounded-norm $w$. Moreover, the authors provide related results for standard algorithms like GLMtron and logistic regression.
Strengths: The paper is solid, and the proof techniques are interesting. This paper considers the harder problem of the non-realizable case ($opt \neq 0$) with an unknown activation function. Moreover, the analysis is based on more relaxed conditions compared to prior works, as it only requires the marginals to have bounded second moments.
Weaknesses: The paper claims (in the title and abstract) to prove agnostic learnability. This is misleading! First, agnostic in the PAC learning context implies no distributional assumptions on the labeled samples, but a bounded second moment is assumed in this work.
Second, the agnostic learner has an error $opt+\epsilon$ which is clearly not the case here! The bound in Theorem 3.1 is of the form $O(B\sqrt{opt})+\epsilon$, where $B$ is the bound on $||w||$.
Given that, the paper shows "weak learnability" at best.
The paper is well-written in general but there are several typos that need to be corrected:
- The notation for sphere, $\mathbb{S}^{d-1}$, is not defined.
- Line 158: is $f'$ the derivative of $f$?
- Line 169: $d$ is missing: $c:\mathbb{R}\to \mathbb{R}$ need to be $c:\mathbb{R}^d\to \mathbb{R}$
- Line 180: $d$ is also missing.
I think it would be better if Definitions 1.4 and 1.5 were written as Assumptions.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Q1. When do we get the $opt+\epsilon$ error?
Q2. What about other loss functions beyond the square error and logistic loss?
Q3. The bound for logistic loss scales with B as $e^{B^2}$, isn't that problematic?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the anonymous reviewer for their comments and for appreciating our work.
You make the point in your review that ‘agnostic learning’ should mean ‘distribution-free agnostic learning,’ as the term was defined in the 1994 Kearns Schapire paper that originally defined the model. We would like to stress that the way we (and others) use the term ‘agnostic learning’ to mean ‘distribution-specific agnostic learning’ is now standard and has widely appeared as such in the literature over the past two decades. More concretely, the two concerns of the reviewer about our use of “agnostic learning” were the following 1) we make an assumption on the marginal distribution of the features (note that we do not make such an assumption on the labels) and 2) the form of the error guarantee. Both of these restrictions are now common in the (classical and recent) agnostic learning literature. For example, see Theorem 3 in the classical paper [1] on agnostic learning of halfspaces (note that the algorithm is still called an agnostic learner) and the more recent paper [2] where they propose an agnostic learning algorithm for a single ReLU neuron with guarantee $\mathrm{poly}(\mathrm{opt})$, under some assumptions about the marginal distribution. Note that both of these papers also use the term agnostic learning in their title. The term is generally used to emphasize that the learner is agnostic to how the *labels* are generated and is not tied to a specific learning scenario.
We also thank the reviewer for pointing out some typos, which we will fix in future versions of the paper.
Regarding the questions posed:
Q1. There have been (statistical query) lower bounds in prior work (see lines 127-130) that provide evidence for the hardness of learning up to $\mathrm{opt} + \epsilon$, even when the activation is fixed to be a ReLU and the marginal is the standard Gaussian. Although we have already provided pointers to such lower bounds, we will exploit the additional space provided for the camera-ready version and add more details in Section 1.1.
Q2. We focus on the squared error, which is a standard loss function used in learning theory and provides results comparable to prior work in the field (e.g., [2]). However, our technique is general; our analysis is based on the idea of translating bounds with respect to matching losses to results with respect to the squared loss and it is conceivable that we could obtain results even when we substitute the squared loss with some other loss function of interest, by proving corresponding distortion bounds. We note that this might entail some technical complications, as the squared loss is usually more analytically convenient than other losses.
Q3. The bound we provide for the squared error of the logistic regression algorithm scales indeed exponentially with the bound on the norm of the parameter vector. However, please keep in mind:
1. Some dependence on the norm is expected (since we do not make any anti-concentration assumptions on the marginal) and the result is useful when the norm bound is constant.
2. Prior work contains bounds that are quantitatively similar (under even stronger assumptions). In Theorem 3.3 of FCG [2], where it is assumed that the marginal distribution is in fact bounded, the dependence on the norm bound is also exponential (since their parameter $\gamma$ in their Assumption 3.1 would decay exponentially with $\rho$ when $\sigma$ is the sigmoid). Note that our method can also yield an $O(\mathrm{opt})$ result for learning the sigmoid if we assume the marginal to be bounded.
3. Our result demonstrates that a standard algorithm (minimizing logistic loss) achieves guarantees that are state-of-the-art at least in some regime.
4. We once more use our simple analysis approach based on distortion bounds.
[1] Kalai, A.T., Klivans, A.R., Mansour, Y., & Servedio, R.A. (2005). Agnostically learning halfspaces. FOCS 2005, SICOMP 2008.
[2] Frei, Spencer, Yuan Cao, and Quanquan Gu. Agnostic learning of a single neuron with gradient descent. NeurIPS 2020. | Summary: This work studies the problem of agnostically learning single index models with arbitrary monotone and Lipschitz activations. Compared to prior work, this work establishes the existence of a learning algorithm under more relaxed assumptions. This work is based on recent work by Gopalan et al. [2023] on omniprediction using predictors satisfying calibrated multiaccuracy.
Strengths: This paper studies a difficult but realistic setting, and the topic of agnostically learning single index models is very interesting and important in itself.
Weaknesses: I find this paper a bit difficult to read for general researchers in the machine learning community, especially those who are not familiar with the related literature.
I also have a (perhaps naive) question about the claims in the abstract and line 33. They seem to claim that this paper devises an "algorithm" that can efficiently learn SIMs in an agnostic setting; I wonder what the algorithm actually is. It seems that the theoretical results in this paper all point to the existence of such an algorithm, but not to an actual algorithm that can be used to compute the estimator.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: My questions are raised in the previous section.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 1 poor
Contribution: 3 good
Limitations: This paper may benefit from polishing up the write-up and make it more friendly for general ML researchers or practitioners.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the anonymous reviewer for expressing their concerns about the readability of our paper.
As we stated in the global response, we are planning to make certain modifications which we believe will significantly improve the readability of our paper for researchers of diverse backgrounds. We believe so because there were no objections to the overall structure of our paper, and in particular (after resolving the small number of issues pointed out by the reviewers): the introduction section contains minimal technical formalism and should be accessible to researchers of diverse backgrounds. It contains the definition of our setting, its significance, informal versions of our main results (Theorems 1.1, 1.2, 1.3), as well as several pointers to prior work for further reading (e.g., Section 1.1). The following sections of our main paper contain some technical details, but we view this as a strength rather than a weakness, since it contributes to the precision of our statements, which is important for the problem we consider. That said, Section 1.2 provides the required background for the following sections. Hence, Sections 2, 3, 4 and 5 should also be accessible to a diverse audience, although an in-depth approach might be easier for researchers in the respective area.
Regarding the second question of the reviewer, it is implicit in our proofs that we use the algorithm of Gopalan et al. [2023] for calibrated multiaccuracy (see Section 7.2 of Gopalan et al. [2023]). In particular, the algorithm computes a calibrated and multiaccurate predictor by running a large enough alternating sequence of multiaccuracy and calibration steps (see Algorithm 2 in Gopalan et al. [2023]). The reviewer correctly points out that we should state the algorithm we use explicitly (in pseudocode), which we are planning to do for the camera-ready version of the paper exploiting the additional page allowed for the final version. | Rebuttal 1:
Rebuttal: We wish to thank the anonymous reviewers for their constructive feedback! In this global response, we provide general responses to concerns shared by more than one of the reviewers and we provide more specific answers in the personal responses.
It is true that our upper and lower bounds on the approximation guarantee are not tight, but we are the first to obtain nontrivial bounds in this very general setting: we succeed simultaneously across all monotone Lipschitz activations. We also make the first connection between SIM learning and multiaccuracy/omniprediction. We leave tightening these bounds as an important open problem.
Some of the reviewers expressed some concerns regarding the readability of our paper. Although this was not a concern shared by all of the reviewers, in order to make our work even more accessible to a more diverse audience, we have a concrete plan that consists of valuable but minimal additions and modifications for the final version (exploiting the additional space allowed). In particular, 1) we will provide a larger amount of detail in Section 1.1, so that the scope of our work with respect to the relevant literature is clearer, 2) we will add a description for the algorithm we use (which is based on the algorithm of Gopalan et al. [2023] for calibrated multiaccuracy) in pseudocode and 3) we will correct typos and make clarifications in our proofs according to the reviewers’ suggestions (or any additional suggestions they provide during the discussion period). Overall, we believe that these changes are minimal, but may resolve much of the concerns raised by some of the reviewers, since there were no major objections on the overall structure of our paper or the architecture of our proofs (i.e., hierarchy of claims/lemmas).
One reviewer compared some of our results to the FCG paper “Agnostic Learning of a Single Neuron with Gradient Descent” – please note that the FCG paper has a hidden factor of d inside their O(opt) bound even when the marginal distribution is a spherical Gaussian, yielding a very weak guarantee (additionally they must know the activation $\sigma$ beforehand). Our bounds only depend on the second moment of the marginal distribution. If we make the further assumption that the distribution is bounded as FCG does, then we actually obtain the identical O(opt) guarantee as FCG (we give more comments on FCG below). | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Efficient Adversarial Contrastive Learning via Robustness-Aware Coreset Selection | Accept (spotlight) | Summary: This paper focuses on the problem of efficient adversarial contrastive learning. The authors propose Robustness-Aware Coreset Selection (RCS) to speed up ACL, and according to the theoretical analysis and experimental results, the proposed framework is effective and does not hurt performance.
Strengths: 1. The topic is interesting and the proposed method is simple yet effective;
2. The experiments are well-designed and solid. Also, several runs are performed on each task, making it more convincing;
3. The writing is easy to follow;
4. The proofs are well-written and sound.
Weaknesses: 1. According to the experimental results, it still takes much time to run the proposed algorithm. While the authors discussed it in the limitations, it might be a challenge to put it into practical use;
2. Figures 2, 3, and 4 are a bit hard to read and might need some rework;
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Please refer to the previous section
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors discussed limitations in the paper appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for your supportive and constructive comments! Please find our replies below.
> 1. [Reply to W1] Thanks for pointing out this challenge!
We conjecture that applying better submodular function optimization methods to solve the objective function of our proposed RCS can further improve efficiency. For example, [1] has shown that the greedy algorithm can be further accelerated through lazy evaluations. [2] proposes to efficiently approximate the set function with some relative error when the set function itself is difficult and computationally expensive to calculate. Meanwhile, [2] shows that most methods for maximizing submodular functions are robust against such errors.
Besides, our work is orthogonal to large batch optimization [3] in practice. We conjecture that incorporating our proposed method with the large batch optimization methods would be practical for efficiently learning robust representations using large-scale datasets and large models.
> 2. [Reply to W2] Thanks for the constructive suggestion! We will provide more explanations in the captions of the figures in revisions.
We will add clarifications in the captions: ACL with RCS and DynACL with RCS correspond to the red and orange solid lines, respectively. ACL with Random and DynACL with Random correspond to the blue and green dotted lines, respectively.
*References*
[1] Krause, Andreas, and Daniel Golovin. "Submodular function maximization." Tractability 3.71-104 (2014): 3.\
[2] Golovin, Daniel, and Andreas Krause. "Adaptive submodularity: Theory and applications in active learning and stochastic optimization." Journal of Artificial Intelligence Research 42 (2011): 427-486.\
[3] Large Batch Optimization for Deep Learning: Training BERT in 76 minutes, You et al., ICLR 2020.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the responses. I will keep my score. | Summary: This paper introduces a robustness-aware coreset selection (RCS) method without requiring label information to speed up adversarial contrastive learning. RCS selects an informative training subset that minimizes the representational divergence (RD) between adversarial and natural data. Theoretically, the authors prove a greedy search algorithm can solve a proxy problem and provide the lower bound of the solution. Empirically, comprehensive experimental results demonstrate RCS can significantly speed up ACL while maintaining robustness transferability. To my knowledge, it is the first effort to apply ACL on the large-scale datasets ImageNet-1K and obtain effective robust representations.
Strengths: (1) The proposed method is reasonable and novel. The authors propose to select subsets guided by the RD between natural data and its adversarial variant. RCS does not need label annotations while the existing related work requires labels during coreset selection, which supports the originality of the proposed method. This paper is the first to obtain robust representations by ACL pre-training on ImageNet-1K efficiently via RCS. I think this paper has adequately cited the related work.
(2) The submission is technically sound. The claims are well supported by theoretical analyses and experimental results. Theoretically, the authors prove solving a proxy problem efficiently via the greedy search can guarantee the optimality of the solution for the original problem. Empirically, the authors apply RCS to speed up ACL and its variant DynACL on various datasets and show that RCS can maintain natural and robust test accuracy on various downstream tasks. Besides, the authors provide extensive results that validate RCS can be applied to accelerate supervised adversarial training including Madry’s and TRADES on CIFAR-10 and ImageNet-1K.
(3) Self-supervised robust pre-training can provide improved robustness transferability without requiring label annotations. However, due to computational prohibition, ACL methods have not previously been applied to large-scale datasets. This paper solves this important issue and enables ACL to be conducted on large-scale datasets, tackling a practically meaningful challenge.
(4) This paper is well-organized and well-written. The reviewer can easily follow most of the content.
Weaknesses: Minor comment - although the authors propose to use the greedy search algorithm to efficiently search the coreset, it still needs to consume extra time for CS during robust pre-training. How to further improve the efficiency of ACL and maintain its effectiveness should be an interesting future direction.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: See weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The limitations and the possible negative societal impacts of this submission have been adequately discussed by the authors in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for your positive and constructive comments!
> [Reply to Weakness] We believe this is an interesting future direction!
We conjecture that applying a better submodular function optimization method to solve the objective function of our proposed RCS can further improve efficiency. For example, [1] has shown that the greedy algorithm can be further accelerated through lazy evaluations. [2] proposes to efficiently approximate the set function with some relative error when the set function itself is difficult and computationally expensive to calculate; meanwhile, [2] shows that most methods for maximizing submodular functions are robust against such errors.
*References*
[1] Krause, Andreas, and Daniel Golovin. "Submodular function maximization." Tractability 3.71-104 (2014): 3.\
[2] Golovin, Daniel, and Andreas Krause. "Adaptive submodularity: Theory and applications in active learning and stochastic optimization." Journal of Artificial Intelligence Research 42 (2011): 427-486. | Summary: This paper proposes a robustness-aware coreset selection (RCS) method, which is applied to accelerate adversarial contrast learning (ACL) in the absence of labeling information. Especially, the coreset searched by RCS minimizes the representation difference between the natural data and their adversarial examples, which is achieved by a greedy search method. And experimental results demonstrate that RCS can indeed speed up ACL without significantly compromising the robustness transferability.
Strengths: 1. This paper is well-written and easy to follow.
2. The coreset searched by RCS is not only small in number but also beneficial in improving the adversarial robustness of representations.
3. Experimental results demonstrate that RCS can indeed speed up ACL without significantly compromising the robustness transferability.
Weaknesses: 1. This paper points out that ACL with RCS trains the model on the previously selected coreset for $\lambda$ epochs, and every $\lambda$ epochs a new coreset is selected. So, does the value of $\lambda$ affect the effectiveness as well as the efficiency of ACL? It may be worthwhile for the authors to discuss this further.
2. In the experimental part, it can be found that the coreset searched by RCS without label information achieves better performance compared to the coreset searched by ACS, or even the whole dataset, as shown in Tables 12 and 13. How to explain this interesting phenomenon?
3. The core of this paper is the coreset found by RCS, so the authors should provide a more focused discussion of it, such as whether it is class-balanced and what the distribution of the subset looks like.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the weaknesses part.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please refer to the weaknesses part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for your positive and constructive comments! Please find our responses below.
> 1. [Reply to W1] A smaller $\lambda$ leads to more pre-training time but higher robust and standard test accuracy in downstream tasks.
We pre-trained ResNet-18 on CIFAR-10 via ACL with RCS using $\lambda \in \\{ 10,20,50\\}$, $k=0.2$, and $\omega=100$. Then, we evaluated the performance on CIFAR-10 via SLF.
SLF evaluation on CIFAR-10:

| $\lambda$ | Pre-training time (hours) | SA (\%) | RA (\%) |
|---|---|---|---|
| 10 | 15.6 | 76.31 | 38.17 |
| 20 | 13.0 | 75.96 | 37.21 |
| 50 | 12.2 | 75.87 | 35.54 |
The above table shows that a smaller $\lambda$ leads to more pre-training time, since coreset selection is conducted more frequently.
Meanwhile, a smaller $\lambda$ can lead to higher robust and standard test accuracy in downstream tasks. This is because a smaller $\lambda$ enables the coreset to be updated more frequently to adapt to the latest state of the model and to select data points that help the model improve its robustness.
> 2. [Reply to W2] Thanks for pointing out this interesting phenomenon! We provide an explanation below.
The phenomenon shown in Tables 12 and 13 is that RCS can speed up Fast-AT and Free-AT while maintaining robust test accuracy. However, RCS sacrifices some natural test accuracy compared to the entire training set. For example, Fast-AT with RCS improves robust test accuracy by 0.02\% while degrading natural test accuracy by 2.73\% compared to Fast-AT on the entire set (Fast-AT-Entire).
Here, we provide an explanation. We empirically find that the RD losses on the CIFAR-10 test set of ResNet-18 trained by Fast-AT-Entire, Fast-AT with ACS, and Fast-AT with RCS are 0.0394, 0.0423, and 0.0375, respectively. It indicates that our proposed RCS, whose objective is to find coresets that help minimize the RD loss, indeed minimizes the RD loss. Note that TRADES [1] proposed to obtain adversarial robustness by penalizing the KL divergence between natural data and its adversarial variant which equals our proposed RD loss. Therefore, our proposed RCS can help maintain and even slightly improve adversarial robustness by selecting coresets that aim to minimize the RD loss.
Besides, our proposed RCS actually utilizes the information of the entire training set when selecting a coreset. In particular, RCS dynamically updates the coreset according to the entire training set and the latest model parameters every $\lambda$ epochs. In this regard, RCS does not use less information than training on the entire set. Therefore, RCS can achieve robustness comparable to that of the entire training set.
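As an illustrative sketch (this is our reading of the RD loss as the TRADES-style KL term over model outputs, in the spirit of [1]; the function names and the use of class logits are our own assumptions, not code from the paper):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def rd_loss(out_natural, out_adversarial, eps=1e-12):
    """Batch-averaged KL(p_natural || p_adversarial) over model outputs."""
    p = softmax(out_natural)
    q = softmax(out_adversarial)
    return (p * (np.log(p + eps) - np.log(q + eps))).sum(axis=1).mean()
```

The divergence is zero when natural and adversarial outputs agree and grows as the adversarial perturbation changes the model's predictions, which is why driving this quantity down tracks adversarial robustness.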
> 3. [Reply to W3] Thanks for your great suggestions! We demonstrate that RCS can select a coreset that is closer to the full training set than Random.
Note that the imbalance ratio [2] is the ratio of the sample size of the largest majority class to that of the smallest minority class. Maximum mean discrepancy (MMD) [3] based on the Gaussian kernel is a classical measure of the distance between two distributions.
The left panel of ***Figure G2*** in the "global" file shows that the coreset selected by RCS is almost class-balanced, since the imbalance ratio of RCS is only slightly higher than 1.0. The right panel of ***Figure G2*** in the "global" file shows that RCS yields a lower MMD between the entire training set and the selected coreset compared to Random.
Therefore, our quantitative analysis demonstrates that RCS generates a coreset that is closer to the entire training set than Random.
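For concreteness, the Gaussian-kernel squared MMD used in this comparison can be sketched as follows (a minimal version; the bandwidth $\sigma$ here is our own illustrative choice):

```python
import numpy as np

def gaussian_mmd2(X, Y, sigma=1.0):
    """Squared MMD between samples X and Y under a Gaussian kernel."""
    def gram(A, B):
        # Pairwise squared Euclidean distances via broadcasting.
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-sq / (2.0 * sigma ** 2))
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()
```

A value near zero indicates the coreset and the full training set are close in distribution; a Random subset of the same size would typically yield a larger value.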
*References*
[1] Zhang, Hongyang, et al. "Theoretically principled trade-off between robustness and accuracy." International conference on machine learning. PMLR, 2019.\
[2] Ortigosa-Hernández, J., Inza, I., & Lozano, J. A. (2017). Measuring the class-imbalance extent of multi-class problems. Pattern Recognition Letters, 98, 32-38.\
[3] Gretton, A., Borgwardt, K. M., Rasch, M. J., Schölkopf, B., & Smola, A. (2012). A kernel two-sample test. The Journal of Machine Learning Research, 13(1), 723-773.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their comprehensive responses and new results. After reading other reviewers' opinions, I decide to raise the score.
---
Reply to Comment 1.1.1:
Title: Thank you for your decision to raise the score
Comment: Dear Reviewer LfDi,
Many thanks for acknowledging our responses & new results.
We greatly appreciate your decision to raise the score.
However, we noticed that your score still shows as 5 (your initial score).
Did you perhaps intend to raise it above 5?
Thanks and best wishes,\
Authors | Summary: This paper proposes a coreset selection for efficient adversarial self-supervised learning. By selecting a coreset every epoch that can minimize the representation divergence for training, it maintains similar robustness performance despite a learning speed that is more than three times faster.
Strengths: - It is technically sound. The fact that adversarial self-supervised learning (ASL) can be trained more than three times faster using coreset selection is especially remarkable.
- The proposed claims are well supported through theoretical analysis and experiments.
Weaknesses: - The biggest weakness of the proposed method is that there isn't a significant difference compared to random selection. It utilizes more computation than random selection, but the gain in performance is small. It would be nice if the difference in computation compared to random selection could be explained.
- Moreover, the originality of the proposed method seems more like a simple application of the RD loss to previous work [1,2] than a genuinely new contribution. If the authors could explain this more clearly, I would be willing to revise my score.
- Furthermore, the one-step gradient approximation, warmup, last layer gradients, and adversarial example data approximation also seem to be techniques proposed in previous work, so there seems to be no originality in this regard.
- There seems to be a lack of justification that a coreset with a small representational divergence is sufficient to gain robustness, and not enough explanation of how the results achieve comparable robustness.
- It would be helpful if there were explanations, perhaps utilizing representation visualization, as to what samples become the coreset selection, and why it helps with robustness.
[1] Kilamsetty et al., RETRIEVE: Coreset Selection for Efficient and Robust Semi-Supervised Learning
[2] Kilamsetty et al., Glister: Generalization based data subset selection for efficient and robust learning.
==After rebuttal
I changed my score to 6.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: None
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have well described their limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for your positive and thoughtful comments! Please find our responses below.
> 1. [Reply to W1] Our RCS obtains **substantial improvement** compared to random selection (Random).
According to Figure 2, we highlight the performance gain of RCS in terms of robustness transferability from CIFAR-10 to STL10. Besides, in Figures 2-4, the solid lines (results of RCS) are always far above the dotted lines (results of Random), which validates that RCS consistently yields substantial improvements in standard and robust test accuracy compared to Random.
In terms of computation, RCS spends slightly more pre-training time than Random, since RCS needs to spend time on coreset selection every $\lambda$ epochs.
DynACL pre-trained on CIFAR-10, evaluated via ALF on STL10:

| Subset fraction $k$ | Method | Pre-training time (hours) | SA (\%) | RA (\%) |
|---|---|---|---|---|
|0.05|Random|6.5|52.49|26.20|
|0.05|RCS|7.6 (+1.1)|60.60 (**+8.11**)|32.35 (**+6.15**)|
|0.1|Random|8.3|54.85|27.85|
|0.1|RCS|9.4 (+1.1)|62.79 (**+8.21**)|33.89 (**+6.04**)|
|0.2|Random|11.9|55.86|29.45|
|0.2|RCS|13.1 (+1.2)|63.41 (**+7.55**)|34.76 (**+5.31**)|
> 2. [Reply to W2] Simply applying our proposed RD loss to previous work [1,2] cannot obtain our proposed method.
Simply applying the RD loss to previous work [1,2] cannot yield the objective function of RCS. To obtain a coreset, [1,2] formulate a bi-level optimization problem: they first obtain the parameters by minimizing the inner loss and then find the coreset by minimizing the outer loss. If we simply replace the outer loss of [1,2] with the RD loss, the objective function still does not apply to ACL, since the inner loss of [1,2] is a natural loss and requires labels. Therefore, to adapt coreset selection to speed up ACL, we also utilize the ACL loss as the inner loss.
Besides, we cannot simply apply the RD loss to the greedy search algorithm in [1,2] to solve our proposed RCS. Since the objective function of RCS differs from that of [1,2], it is not obvious whether greedy search is applicable to RCS with an optimality guarantee. Therefore, we provide Theorems 1 and 2, which build on the notions of monotonicity and $\gamma$-weak submodularity, to theoretically prove that greedy search can be applied to our proposed RCS with an optimality guarantee. Finally, we propose our RCS algorithm via greedy search.
> 3. [Reply to W3] Besides these four tricks, we propose a unique trick to enable efficient RCS on large-scale datasets with limited GPU memory in Appendix B.1.
RCS on large-scale datasets such as ImageNet-1K needs a large amount of GPU memory to calculate and save the gradient for each minibatch of training data on the GPU as the first step (Lines 5–7 in Algorithm 1). When GPU memory is limited, we are unable to keep all the gradients on the GPU.
To solve this issue, we split the entire training set into several training subsets. Then, we conduct RCS on each training subset and combine the coresets from each subset into the final coreset for robust pre-training. In this way, we enable RCS to efficiently search for coresets from large-scale datasets with limited GPU memory. We apply this trick to the experiments regarding ACL on ImageNet-1K in Section 4.2 and SAT on ImageNet-1K in Section 4.3. We believe this trick can help the implementation of RCS on large-scale datasets in computational-resource-restricted environments.
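The splitting trick described above can be sketched as follows (a minimal illustration with our own names; the `select` function is a stand-in for the paper's per-subset RCS subroutine, not its actual implementation):

```python
def chunked_coreset_selection(dataset, select, n_chunks, budget_per_chunk):
    """Split the data, run coreset selection per chunk, and merge the results.

    Only one chunk's gradients need to reside in GPU memory at a time.
    """
    chunks = [dataset[i::n_chunks] for i in range(n_chunks)]
    coreset = []
    for chunk in chunks:
        coreset.extend(select(chunk, budget_per_chunk))
    return coreset

# Stand-in selector for illustration: keep the `budget` highest-scoring items.
def top_score(chunk, budget):
    return sorted(chunk, reverse=True)[:budget]
```

The merged coreset has size `n_chunks * budget_per_chunk`, so the overall subset fraction is controlled exactly as in the single-pass case.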
> 4. [Reply to W4] We show that a lower RD loss (i.e., the KL divergence between natural data and its adversarial variant) on the test set corresponds to better adversarial robustness in Figure 5 (in Appendix).
Here, we copy the results of Figure 5 to help explain.
ACL pre-trained on CIFAR-10, evaluated via SLF on CIFAR-10:

| Pre-training setting | RD loss (lower is better) | SA (\%) | RA (\%) |
|---|---|---|---|
|Entire|0.1243|78.87|39.19|
|Random-0.05|0.3357|67.45|22.96|
|RCS-0.05|0.1730|72.56|32.49|
|Random-0.1|0.3094|70.68|27.19|
|RCS-0.1|0.1695|74.67|34.30|
|Random-0.2|0.2333|72.01|29.87|
|RCS-0.2|0.1664|75.96|37.21|
The table shows that our proposed RCS, whose objective is to find coresets that help minimize the RD loss, indeed minimizes the RD loss. Meanwhile, it empirically shows that a lower RD loss corresponds to higher robust test accuracy. Note that TRADES [5] proposed to obtain adversarial robustness by penalizing the KL divergence between natural data and its adversarial variant, which equals our proposed RD loss. Therefore, our proposed RCS can help maintain adversarial robustness by selecting coresets that aim to minimize the RD loss.
> 5. [Reply to W5] Thanks for your suggestions! We provide a visualisation analysis as follows.
We count the frequency of each training sample in the CIFAR-10 dataset being selected into the coreset. Then, we visualize the top-5 most-frequently selected (MFS) data and the top-5 least-frequently selected (LFS) data in ***Figure G1*** in the "global" file.
***Figure G1*** shows that, compared to LFS data, MFS data are images whose backgrounds are more complex and more difficult to distinguish from the subject. Recent work [3,4] has shown that exempting the representations from nuisance style factors such as the background can improve robustness against distribution shifts. RCS's preference for images with complex backgrounds helps the model learn representations that are independent of background factors, thus helping maintain robustness against adversarial perturbations.
References\
[1] Kilamsetty et al., RETRIEVE: Coreset Selection for Efficient and Robust Semi-Supervised Learning\
[2] Kilamsetty et al., Glister: Generalization based data subset selection for efficient and robust learning\
[3] Representation learning via invariant causal mechanisms. Mitrovic et al., ICLR 2021\
[4] Invariant risk minimization. Arjovsky et al., 2020\
[5] Theoretically principled trade-off between robustness and accuracy. Zhang et al., ICML 2019
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their comprehensive response and additional experiments. Most of my concerns have been addressed, so I have decided to increase my score to 6. | Rebuttal 1:
Rebuttal: [**Rebuttal Highlights**]
Many thanks for all reviewers' supportive and constructive comments!
Following the reviewers' suggestions, we uploaded new ***Figure G1*** and ***Figure G2*** in the "**global**" file to provide a focused discussion of the coreset.
> 1. [For Reviewer **fJND**] In ***Figure G1***, we provide a visualization analysis to interpret why the coreset selected by RCS can help maintain robustness.
We count the frequency of each training sample in the CIFAR-10 dataset being selected into the coreset. Then, we visualize the top-5 most-frequently selected (MFS) data and the top-5 least-frequently selected (LFS) data in ***Figure G1***.
***Figure G1*** shows that, compared to LFS data, MFS data are images whose backgrounds are more complex and more difficult to distinguish from the subject. Recent work [1,2] has shown that exempting the representations from nuisance style factors such as the background can improve robustness against distribution shifts. RCS's preference for images with complex backgrounds helps the model learn representations that are independent of background factors, thus helping maintain robustness against adversarial perturbations.
> 2. [For Reviewer **LfDi**] In ***Figure G2***, we demonstrate that RCS can select a coreset that is closer to the full training set than Random.
Note that the imbalance ratio [3] is the ratio of the sample size of the largest majority class to that of the smallest minority class. Maximum mean discrepancy (MMD) [4] based on the Gaussian kernel is a classical measure of the distance between two distributions.
The left panel of ***Figure G2*** shows that the coreset selected by RCS is almost class-balanced, since the imbalance ratio of RCS is only slightly higher than 1.0. The right panel of ***Figure G2*** shows that RCS yields a lower MMD between the entire training set and the selected coreset compared to Random. Therefore, our quantitative analysis demonstrates that RCS selects a coreset that is closer to the entire training set than Random.
*References*
[1] Representation learning via invariant causal mechanisms. Mitrovic et al., ICLR 2021.\
[2] Invariant risk minimization. Arjovsky et al., 2020.\
[3] Ortigosa-Hernández, J., Inza, I., & Lozano, J. A. (2017). Measuring the class-imbalance extent of multi-class problems. Pattern Recognition Letters, 98, 32-38.\
[4] Gretton, A., Borgwardt, K. M., Rasch, M. J., Schölkopf, B., & Smola, A. (2012). A kernel two-sample test. The Journal of Machine Learning Research, 13(1), 723-773.
Pdf: /pdf/3a5279b94c038b7dd0359c64f3010f5654b4f146.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Tackling Heavy-Tailed Rewards in Reinforcement Learning with Function Approximation: Minimax Optimal and Instance-Dependent Regret Bounds | Accept (poster) | Summary: The main contributions are two-fold. For heavy-tailed payoffs, the authors design algorithms for heavy-tailed linear bandits and derive a variance-dependent T-round regret bound; for linear MDPs, an instance-dependent K-episode regret bound is obtained. All results rely substantially on adaptive Huber regression techniques.
Strengths: For heavy-tailed linear bandits, the author proposed how to tune the robustification parameter that balances bias and robustness on the fly.
for linear MDPs with bounded rewards problem, the author employs separate estimation techniques to handle heavy-tailed rewards and transition kernels, i.e, utilizing adaptive Huber regression to estimate heavy-tailed rewards and weighted ridge regression to estimate the expected next-state value functions.
Weaknesses: The derived regret bounds depend simultaneously on the feature dimension, the number of rounds, and the variance or central moment of the reward at the t-th round, which seems to worsen the obtained results compared to previous research works. The overall regret scales with the feature dimension, again restricting the proposed approach to low-dimensional problems.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Since the author believes that their proposed algorithm is computationally effective, why not provide specific examples to demonstrate the practical application effect of the proposed algorithm?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: the authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your review.
### Some corrections of the reviewer
> For heavy-tailed payoffs, design heavy-tailed linear bandits, derive the variance-dependent $T$-round regret;
In settings where $\epsilon<1$, the variances of the reward functions do not exist. Our regret bound instead relies on the $(1+\epsilon)$-central moments $\lbrace \nu_t^{1+\epsilon} \rbrace_{t\in[T]}$. We refer the reviewer to lines 48 to 54 for more details.
> For linear MDPs with bounded rewards problem, ...
We consider linear MDPs with heavy-tailed rewards, where the magnitude of the reward can be infinite. The reviewer perhaps means that the cumulative expected rewards are bounded, as in Assumption 2.8.
### Worse results
We respectfully disagree with the reviewer's argument that our work achieves worse results compared with previous works. Since we are the first to establish such regret bounds in the presence of heavy-tailed rewards, none of the previous research works are readily applicable to heavy-tailed settings. In contrast, our results are shown to be minimax optimal, can be used directly to deal with light-tailed rewards by setting $\epsilon=1$ and recover or improve the SOTA results (See also line 69 to 71).
### Computational complexity and experiments
We say an algorithm is computationally efficient if the computational complexity scales polynomially to the parameters of the problem, e.g., $d,H,K$ of the linear MDP. We provide the computational complexity of Heavy-LSVI-UCB (Algorithm 3) here. For the linear MDPs with heavy-tailed rewards defined in Section 2.2, the computational complexity of Heavy-LSVI-UCB is $\tilde{O}(HK\mathcal{R} + d^4|\mathcal{A}|H^3K)$. Here $\tilde{O}(\mathcal{R})$ is the cost of the optimization algorithm for solving adaptive Huber regression in (5.1). Such a complexity allows us to focus on the complexity introduced by the RL algorithm rather than the optimization subroutine for solving adaptive Huber regression. Compared to that of LSVI-UCB++, $\tilde{O}(d^4|\mathcal{A}|H^3K)$, the extra term $\tilde{O}(HK\mathcal{R})$ causes a slightly worse computational time in terms of $K$. This is due to the absence of a closed form solution of adaptive Huber regression in (5.1). Thus extra optimization steps are unavoidable. Nevertheless, since (5.1) is a convex optimization problem and thus can be solved efficiently, we can specialize $\mathcal{R}$ by adopting Nesterov accelerated method, which gives $\mathcal{R}=\tilde{O}(d+d^{-\frac{1-\epsilon}{2(1+\epsilon)}} H^{\frac{1-\epsilon}{2(1+\epsilon)}} K^{\frac{1+2\epsilon}{2(1+\epsilon)}})$. It implies the computational complexity of Heavy-LSVI-UCB is better than that of LSVI-UCB, $\tilde{O}(d^2|\mathcal{A}|HK^2)$, with respect to $K$, thanks to the rare-switching updating policy. We thank the reviewers for raising the question of computational complexity and we will include it in the next revision. We provide the proof in global rebuttal due to lack of space.
As for the experiments, please note that our paper focuses on the theoretical understanding of heavy-tailed noise in bandit and RL environments. That being said, we are happy to include an experimental study in future versions. We also provide the computational complexity for solving adaptive Huber regression as $\tilde{O}(K^{\frac{1+2\epsilon}{2(1+\epsilon)}})$ in terms of $K$, where the standard Nesterov accelerated method suffices. To address the concerns about empirically testing our proposed estimator based on adaptive Huber regression, we refer to [1], which contains an implementation of a similarly designed estimator; it is indeed efficient in numerical studies.
[1] Sun, Qiang, Wen-Xin Zhou, and Jianqing Fan. "Adaptive huber regression." *Journal of the American Statistical Association* 115.529 (2020): 254-265. | Summary: In this paper, reinforcement learning problem is considered in the episodic setting for linear bandits and linear MDPs under heavy-tailed rewards with potentially infinite variance. Based on adaptive Huber regression and optimism in the face of uncertainty principle, the authors propose algorithms that utilize conditional reward variance.
Strengths: The paper looks technically sound, and the presentation of the material seems to be good with clearly stated assumptions and theorem statements.
On the technical side, the use of adaptive Huber regression in linear bandit and linear MDP setting to tackle heavy-tailed reward distributions is an interesting idea.
Weaknesses: In Assumption 2.6, the centralized moments of order $1+\epsilon$ of the random rewards at each step are assumed to be realizable for each state-action pair, which looks highly unrealistic. Also, in addition to the regret bounds, it would be interesting to see empirical evaluations of the proposed algorithms.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: a) What happens in the absence of the realizability assumption in Assumption 2.6?
b) The paper [25] also seems to consider heavy-tailed rewards, but with finite variance. As such, it would be fair to update the last column of Tables 1 and 2 based on the finiteness of the variance.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: No substantial discussion on the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your positive comments!
### Realizable central moments assumption
In Assumption 2.6, we assume the $(1+\epsilon)$-central moments of the reward functions have linear structure. This assumption is standard in the current linear MDP literature where instance-dependent (variance-aware) regrets are achieved (see also Remark 2.7). In the absence of Assumption 2.6, we cannot estimate the $(1+\epsilon)$-central moments of the reward functions for each state-action pair, which is crucial to achieving such an instance-dependent regret. To be more specific, since adaptive Huber regression needs knowledge of the $(1+\epsilon)$-central moments, without such a realizability assumption there is no way to estimate them, and the best we can do is to use the upper bound of the moments $\nu_R^{1+\epsilon}$ instead, which only gives a worst-case regret as in Corollary 6.5. We leave it as future work to propose a computationally efficient algorithm without knowledge of the moments for heavy-tailed linear bandits.
### Experiments
Note that our paper focuses on the theoretical understanding of heavy-tailed noise in bandits and RL environments. That being said, we are happy to include an experimental study in future versions. We also provide the computational complexity for solving adaptive Huber regression as $\tilde{O}(K^{\frac{1+2\epsilon}{2(1+\epsilon)}})$ in terms of $K$, for which the standard Nesterov accelerated method suffices. Regarding concerns about empirically testing our proposed estimator based on adaptive Huber regression, we refer to [1], which contains an implementation of a similarly designed estimator and shows that it is efficient in numerical studies.
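To make the reference concrete, here is a minimal numpy sketch of Huber-loss regression under heavy-tailed (Student-$t$, df $=2$) noise, solved by plain gradient descent. This is only an illustration of the general technique, not the paper's adaptive procedure; the constants and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, tau = 5, 2000, 1.0
theta_star = np.ones(d) / np.sqrt(d)
X = rng.standard_normal((n, d))
# Student-t noise with df=2: the (1+eps)-th moment is finite for eps < 1,
# but the variance is infinite
y = X @ theta_star + rng.standard_t(df=2, size=n)

theta = np.zeros(d)
for _ in range(500):
    r = y - X @ theta
    psi = np.clip(r, -tau, tau)      # Huber influence function: clips large residuals
    theta += 0.5 * (X.T @ psi) / n   # gradient descent step on the averaged Huber loss
```

Despite the infinite-variance noise, the clipped-residual updates recover `theta_star` to within a small error, which is the qualitative behavior the estimator in [1] exhibits.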
### Settings of finite-variance rewards
Yes, [25] provided the first variance-aware regret in the presence of finite-variance rewards. However, we focus on heavy-tailed settings where the variance of the reward functions may not exist. Thanks for your suggestion; we will add footnotes to Tables 1 and 2 in the next revision.
[1] Sun, Qiang, Wen-Xin Zhou, and Jianqing Fan. "Adaptive huber regression." *Journal of the American Statistical Association* 115.529 (2020): 254-265.
---
Rebuttal Comment 1.1:
Comment: I acknowledge the response of the authors on the assumption. The absence of any empirical investigation is still an important weakness in my opinion, even for a primarily theoretical work.
---
Reply to Comment 1.1.1:
Comment: Thanks for your feedback.
We conducted empirical evaluations of the proposed algorithm for heterogeneous heavy-tailed linear bandit problems, Heavy-OFUL, which can be regarded as a special case of linear MDPs.
Comparisons are made against MENU and TOFU, which achieve the worst-case optimal regret bound in this setting (see Table 1 in our paper).
To the best of our knowledge, we are the first to address the challenge of heavy-tailed rewards in RL with function approximation, where $\epsilon$ can be less than 1.
Therefore, there are no other algorithms in the RL literature that can be compared with ours (see Table 2).
Results demonstrate the effectiveness of the proposed algorithm, which further corroborates our theoretical regret bounds.
Since we could not upload images of the experiments to OpenReview for the time being, we show the results of our experiment in the table below. We will find a way to upload images anonymously as soon as possible.
| Algorithms \ Iteration | 1000 | 2000 | 3000 | 4000 | 5000 | 6000 | 7000 | 8000 | 9000 | 10000 |
| ---------------------- | ------ | ------ | ------ | ------ | ------ | ------- | ------- | ------- | ------- | ------- |
| MENU | 160.00 | 332.78 | 513.98 | 692.20 | 860.32 | 1047.81 | 1219.89 | 1401.82 | 1578.78 | 1673.34 |
| TOFU | 179.21 | 362.29 | 544.20 | 728.04 | 910.21 | 1092.74 | 1277.66 | 1460.74 | 1642.73 | 1825.40 |
| Heavy-OFUL | 72.96 | 147.50 | 241.67 | 336.48 | 434.02 | 535.09 | 636.30 | 740.47 | 839.75 | 935.39 |
Comparison of our algorithm (Heavy-OFUL) versus MENU and TOFU in heavy-tailed linear bandit problems (see Definition 2.1) for $1\times10^4$ rounds. We generate 5 independent paths for each algorithm and show the average cumulative regret. The experimental setup is as follows: let the feature dimension be $d = 10$. For the chosen arm $\phi_t \in \mathcal{D}_t$, the reward is $R_t = \langle \phi_t, \theta^* \rangle + \varepsilon_t$, where $\theta^* = \mathbf{1}_d / \sqrt{d} \in \mathbb{R}^d$ so that $\|\theta^*\|_2 = 1$. $\varepsilon_t$ is first sampled from a Student's $t$-distribution with $\text{df}=2$ degrees of freedom, then multiplied by a scaling factor $\alpha$ with $\log(\alpha) \sim \mathrm{Unif}(0,2)$, so that the central moments of $\varepsilon_t$ differ across rounds. Note the variance of $\varepsilon_t$ does not exist, and we choose $\epsilon=0.99$. Normalization is made to ensure $L=B=1$. | Summary: This paper addresses reinforcement learning (RL) with function approximation in the presence of heavy-tailed noise whose central moment is known. In general, these online learning problems rely on a self-normalized inequality to construct a confidence set for the optimal parameter. However, the existing self-normalized inequalities have a noise-magnitude term that is intractable with heavy-tailed noise. The authors solve this problem by utilizing adaptive Huber regression and deriving a robust self-normalized inequality without the noise-magnitude term. Using the proposed self-normalized inequality, they introduce two algorithms. The first algorithm, HEAVY-OFUL, is designed for linear bandits and shown to be minimax optimal. Building upon HEAVY-OFUL, they present HEAVY-LSVI-UCB for linear MDPs, which has a better first-order regret bound than previous works. Furthermore, they provide the minimax lower regret bound in linear MDPs with heavy-tailed noise, which implies the minimax optimality of HEAVY-LSVI-UCB in the worst case.
Strengths: This paper is the first attempt to deal with heavy-tailed RL with function approximation.
- (Optimality) The regret bound of HEAVY-OFUL is minimax optimal in both stochastic and deterministic linear bandits with heavy-tailed rewards. In addition, the regret bound of HEAVY-LSVI-UCB recovers the previous variance-aware regret in [1] and improves the existing instance-dependent regrets in linear MDPs [1, 2].
- (Originality) To address the heavy-tailed rewards, a novel robust self-normalized inequality is established.
Weaknesses: - (Computational costs) The first concern is the practical usage of the proposed algorithms. As the authors noted, the regret analyses of the algorithms inherently depend on the robust self-normalized inequality (Theorem 3.3), which bounds the deviation between the estimated parameter and the optimal parameter. However, I believe the fact that the estimator is obtained from adaptive Huber regression is problematic, since it requires iterative optimization steps due to the absence of a closed form (Line 5 of Alg. 1). Indeed, the proposed algorithms (Alg. 2, 3) contain the additional iterative algorithm (Alg. 1) to ensure their theoretical results, and thus share the intrinsic drawback of Huber regression, namely linear computational complexity per iteration. In particular, HEAVY-LSVI-UCB utilizes adaptive Huber regression to optimize both rewards and central moments.
- (Absent experiments) There are no experiments supporting the theoretical results of the proposed algorithms or addressing the concerns about computational costs, even on simple synthetic problems.
- (Assumption) The authors assumed that the central moment of rewards is known.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Since the paper emphasizes the computational contributions of the proposed algorithms compared with existing ones, I think empirical results are needed. Can you provide experiments related to this?
- If possible, please discuss the computational aspects of the proposed algorithms.
- In my understanding, the essential key to handling heavy-tailed noise is the robust self-normalized inequality (Theorem 3.3). However, the inequality involves the term $b$, which requires prior information about the central moment. While the closest work [1] makes a similar assumption, I think this is a strong condition. Can this be relaxed?
- In line 219, the authors claim that when $\epsilon=1$ and $\nu=0$, $\forall t$, the regret upper bound of HEAVY-OFUL matches the lower bound of the K-armed contextual bandit, $\Omega(d)$ [3]. Can you reconsider this argument? To my knowledge, the regret bound proposed in [3] is $\Omega(\sqrt{dT})$, not $\Omega(d)$. Moreover, the settings differ from each other in that [3] addresses finite-armed bandits, while this work deals with (possibly) infinite-armed bandits.
[1]. Xiang Li and Qiang Sun. Variance-aware robust reinforcement learning with linear function approximation with heavy-tailed rewards. arXiv preprint arXiv:2303.05606, 2023
[2]. Andrew J Wagenmaker, Yifang Chen, Max Simchowitz, Simon Du, and Kevin Jamieson. First order regret in reinforcement learning with linear function approximation: A robust estimation approach. In International Conference on Machine Learning, pages 22384–22429. PMLR, 2022.
[3]. Wei Chu, Lihong Li, Lev Reyzin, and Robert Schapire. Contextual bandits with linear payoff functions. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 208–214. JMLR Workshop and Conference Proceedings, 2011.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: There are no limitations discussed in the manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for raising the concerns.
### Computational complexity
We say an algorithm is computationally efficient if its computational complexity scales polynomially in the parameters of the problem, e.g., $d,H,K$ of the linear MDP. We provide the computational complexity of Heavy-LSVI-UCB (Algorithm 3) here. For the linear MDPs with heavy-tailed rewards defined in Section 2.2, the computational complexity of Heavy-LSVI-UCB is $\tilde{O}(HK\mathcal{R} + d^4|\mathcal{A}|H^3K)$. Here $\tilde{O}(\mathcal{R})$ is the cost of the optimization algorithm for solving adaptive Huber regression in (5.1). Such a decomposition allows us to focus on the complexity introduced by the RL algorithm rather than the optimization subroutine for solving adaptive Huber regression. Compared to that of LSVI-UCB++, $\tilde{O}(d^4|\mathcal{A}|H^3K)$, the extra term $\tilde{O}(HK\mathcal{R})$ incurs a slightly worse computational time in terms of $K$. This is due to the absence of a closed-form solution of adaptive Huber regression in (5.1), so extra optimization steps are unavoidable. Nevertheless, since (5.1) is a convex optimization problem and thus can be solved efficiently, we can specialize $\mathcal{R}$ by adopting the Nesterov accelerated method, which gives $\mathcal{R}=\tilde{O}(d+d^{-\frac{1-\epsilon}{2(1+\epsilon)}} H^{\frac{1-\epsilon}{2(1+\epsilon)}} K^{\frac{1+2\epsilon}{2(1+\epsilon)}})$. This implies the computational complexity of Heavy-LSVI-UCB is better than that of LSVI-UCB, $\tilde{O}(d^2|\mathcal{A}|HK^2)$, with respect to $K$, thanks to the rare-switching updating policy. We thank the reviewer for raising the question of computational complexity and will include it in the next revision. We provide the proof in the global rebuttal due to lack of space.
### Experiments
Note that our paper focuses on the theoretical understanding of heavy-tailed noise in bandits and RL environments. That being said, we are happy to include an experimental study in future versions. We also provide the computational complexity for solving adaptive Huber regression as $\tilde{O}(K^{\frac{1+2\epsilon}{2(1+\epsilon)}})$ in terms of $K$, for which the standard Nesterov accelerated method suffices. Regarding concerns about empirically testing our proposed estimator based on adaptive Huber regression, we refer to [1], which contains an implementation of a similarly designed estimator and shows that it is efficient in numerical studies.
### Assumption that the central moments of rewards are known
We do NOT assume the central moments of rewards are known in heavy-tailed linear MDPs, which is the main difficulty in achieving an instance-dependent regret bound. To address this challenge, we use adaptive Huber regression to estimate them (see Section 5.1). It is worth noting that Heavy-LSVI-UCB actually only requires an upper bound on the underlying moments; see line 189 for more discussion on the moment parameter $b$. Thanks for raising the concern; we will highlight this in the next revision.
### Lower bound of deterministic linear bandits
Please note that we consider deterministic linear bandits in line 219, so the central moments of the reward functions vanish, i.e., $\nu_t=0$, and we set $\epsilon=1$ to achieve the $\tilde{O}(d)$ regret. You are right that the settings in [2] are different from ours, since they consider finite-armed bandits with light-tailed noise; their lower bound is therefore not applicable to heavy-tailed linear bandit problems. We will clarify this in the next revision.
In fact, the $\Omega(d)$ lower bound for deterministic linear bandits is straightforward. Consider the decision set $\mathcal{D}=\lbrace e_i \rbrace_{i\in[d]}$, where $e_i$ denotes the $i$-th unit basis in $\mathbb{R}^d$. Each pull of arm can only obtain the information of a single coordinate. Since coefficient $\theta^*$ lies in $d$-dimensional space, even if the rewards are deterministic, $d$ pulls for exploration are unavoidable.
[1] Sun, Qiang, Wen-Xin Zhou, and Jianqing Fan. "Adaptive huber regression." *Journal of the American Statistical Association* 115.529 (2020): 254-265.
[2] Chu, Wei, et al. "Contextual bandits with linear payoff functions." *Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics*. JMLR Workshop and Conference Proceedings, 2011.
---
Rebuttal Comment 1.1:
Comment: Hello reviewer,
The authors addressed your concerns regarding experiments--are you satisfied?
Also it appears there was a misunderstanding regarding moments being known. I concur with the authors' response.
Can you please give an updated opinion? | Summary: The paper considers the problem of linear bandits and linear MDPs, when the noise may be heavy tailed. The main technical tool that they use is the Huber regressor, that enables them to detect extremal noise points that are less informative, and be more robust to these. They show how this regressor can be incorporated into optimistic algorithms to provide sublinear regret bounds.
Strengths: Quality: Claims are sound, and arguments appear to check out. The paper is well-contextualized in the literature.
Significance: Problem is relevant to practitioners, and theoretical tools may be reapplied in similar online problems elsewhere as well.
Originality: The technical work required to incorporate the huber regressor into the optimistic algorithms is nontrivial, and the efforts are appreciated.
Weaknesses: Clarity: While the writing is well-organized, it is quite dense at times and a slog to parse through: for instance, Section 2.2 and Section 5. It would be useful to move some material to the appendix and, instead of encapsulating commentary in remark environments, include it in the main text to introduce more flow and make it easier for the reader.
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors: 1. The setup being considered here, with heavy-tailed additive noise, is different from the situation where the rewards themselves are heavy-tailed. In the current case, the problem is more one of outlier detection, i.e., being robust to extremal noise events. But in the other case, if an arm has a heavy-tailed reward, it may be desirable to pull it because there is the potential for receiving a very high reward from this arm relative to other arms (i.e., an arm like a lottery ticket, with low rewards with high probability but immense rewards with small probability).
I am concerned that conflating these two problems may lead to confusion. Could the authors comment on this distinction? And if the authors agree, I would appreciate if the title were changed to accurately describe the setup and the distinction were made clear in the text.
2. Could you comment on whether a different definition of heavy-tailed, in terms of tail probabilities, could be used instead?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive view of our work, and for the advice on the organization of the paper. We will make adjustments to improve readability in the next revision.
### Setup with heavy-tailed additive noise
The assumption of heavy-tailed additive mean-zero noise is standard in online regression settings, e.g., Section 3 of [1]. Consider a random variable $Y\in\mathbb{R}$ with the structure $Y=\mu+\varepsilon$, where $\mu$ is a constant and $\varepsilon$ is heavy-tailed mean-zero noise. Then $Y$ itself is heavy-tailed as well, with mean $\mu$, since its central moment is bounded. The effort is devoted to recovering the mean $\mu$ in the presence of the heavy-tailed noise $\varepsilon$.
In addition, we believe our work has little to do with outlier detection. Whether the Huber loss can be utilized in the field of outlier detection is an interesting question, but it is beyond the scope of our work.
You are right, the reward of a lottery ticket is a good example of a heavy-tailed distribution. While its magnitude can be extremely large, its mean is supposed to be small. If the agent aims to get a high reward from a single pull of an arm, she may choose the arm with a heavy-tailed reward, since there is the potential of receiving a very high reward from this arm relative to other arms. However, this is a different problem from ours: since our goal is to get the most benefit in the long run, we wish to pull the arm with the maximum expected reward.
### Definition of heavy-tailed distribution in terms of tail probabilities
Good questions. We say a mean-zero random variable $\varepsilon\in\mathbb{R}$ is heavy-tailed if it satisfies $\mathbb{E}[|\varepsilon|^{1+\epsilon}]=\nu^{1+\epsilon}<\infty$ with $\epsilon\in(0,1]$, which implies $\mathbb{P}(|\varepsilon|>x) \le \mathbb{E}[|\varepsilon|^{1+\epsilon}]/x^{1+\epsilon} = \nu^{1+\epsilon}/x^{1+\epsilon}$ for any $x>0$ by Markov's inequality. Symmetrically, if the tail probability of $\varepsilon$ has the form above, we have $\mathbb{E}[|\varepsilon|^{1+\epsilon}] = \int_0^\infty \mathbb{P}(|\varepsilon|>x) \mathrm{d}x = \nu^{1+\epsilon} \int_0^\infty 1/x^{1+\epsilon} \mathrm{d}x = \nu^{1+\epsilon}$. Thus, the definition of heavy-tailed distribution in terms of tail probabilities follows.
[1] Abbasi-Yadkori, Yasin, Dávid Pál, and Csaba Szepesvári. "Improved algorithms for linear stochastic bandits." Advances in neural information processing systems 24 (2011).
---
Rebuttal Comment 1.1:
Comment: Dear reviewer,
I have read your review as well as the author rebuttal. It appears your concerns have been addressed, is that correct? | Rebuttal 1:
Rebuttal: ### Proof of computational complexity
First, to compute $\theta_{k-1,h}$ in line 6 of Algorithm 3, we notice the loss function in (5.1) is $\lambda_R$-strongly convex and $(\lambda_R+K/\nu_\mathrm{min}^2)$-smooth, so there are plenty of convex optimization algorithms available. For example, the Nesterov accelerated method can be used. The number of iterations of Nesterov's method is ${O}(\sqrt{\beta/\alpha}\log(R^2/\epsilon))$ with one gradient evaluation ($O(d)$ operations) per iteration [1]. Here the loss function is supposed to be $\alpha$-strongly convex and $\beta$-smooth, $R$ is the maximum distance between two points, and $\epsilon$ is the precision. Thus the total computational cost is $\tilde{O}(HK\mathcal{R})$ with $\mathcal{R} = \tilde{O}(d\sqrt{1+\frac{K}{\lambda_R\nu_\mathrm{min}^2}})=\tilde{O}(d+d^{-\frac{1-\epsilon}{2(1+\epsilon)}} H^{\frac{1-\epsilon}{2(1+\epsilon)}} K^{\frac{1+2\epsilon}{2(1+\epsilon)}})$.
Second, to evaluate the updated action-value function $Q^k_h(s,a)$ in line 10 of Algorithm 3 for a given pair $(s,a)$, we take the minimum over at most $\tilde{O}(dH)$ action-value functions (see Lemma F.8) with $O(d^2)$ operations for each function (using the Sherman-Morrison formula to compute $H_{k-1,h}^{-1}$ and $\Sigma_{k-1,h}^{-1}$). Thus it takes $\tilde{O}(d^3H)$ time to evaluate the updated action-value function. As a result, to compute $\hat w_{k-1,h}$ in line 6 of Algorithm 3, note $\hat w_{k,h}=\Sigma_{k,h}^{-1} \sum_{i=1}^k \sigma_{i,h}^{-2} \phi_{i,h} V^k_h(s_{i,h+1})$: if $V^k_h$ remains unchanged, we only need to compute the new term $\sigma_{k,h}^{-2} \phi_{k,h} V^k_h(s_{k,h+1})$, which takes $\tilde{O}(d^3|\mathcal{A}|H)$ computational time; if $V^k_h$ is updated, we need to recalculate $\lbrace V^k_h(s_{i,h+1}) \rbrace_{i\in[k]}$, which takes $\tilde{O}(d^3|\mathcal{A}|HK)$ computational time. Note the number of updating episodes is at most $\tilde{O}(dH)$ and the length of each episode is $H$, so the total computational cost is $\tilde{O}(d^4|\mathcal{A}|H^3K)$.
Last, to take action $a_{k,h}$ in line 19 of Algorithm 3, we need to compute $\lbrace Q^k_h(s_{k,h},a) \rbrace_{a\in\mathcal{A}}$ and take the maximum, which takes $\tilde{O}(d^3|\mathcal{A}|H)$ time, incurring a total cost of $\tilde{O}(d^3|\mathcal{A}|H^2K)$. Finally, combining the total costs above gives the computational complexity of Heavy-LSVI-UCB.
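The Nesterov accelerated step assumed in the first part of the proof can be sketched for a generic $\alpha$-strongly convex, $\beta$-smooth objective. This is an illustrative implementation of the standard method, not the paper's exact subroutine; the function name is our own.

```python
import numpy as np

def nesterov_agd(grad, x0, alpha, beta, iters):
    # Accelerated gradient descent for an alpha-strongly convex,
    # beta-smooth objective, with the constant sqrt(kappa) momentum schedule
    kappa = beta / alpha
    momentum = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)
    x = y = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x_next = y - grad(y) / beta        # gradient step from the extrapolated point
        y = x_next + momentum * (x_next - x)  # momentum extrapolation
        x = x_next
    return x
```

On a quadratic with condition number $\kappa = 100$, the iterate converges to the minimizer at the accelerated $O(\sqrt{\kappa}\log(1/\epsilon))$ rate discussed above.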
[1] Bubeck, Sébastien. "Convex optimization: Algorithms and complexity." *Foundations and Trends® in Machine Learning* 8.3-4 (2015): 231-357. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Data-Informed Geometric Space Selection | Accept (poster) | Summary: The paper proposes a new end-to-end distance learning method to facilitate solving downstream prediction tasks. The core idea is to select a subset of geometric spaces from a candidate set that includes Euclidean, projected-sphere, and Poincare-ball spaces, and to compute the final distance in the Cartesian product of the selected spaces. The selector is built by applying the sparsely-gated MOE technique. To improve training stability, balancing regularization terms are added to the training loss. Performance of the proposed method is assessed on one matrix completion task and one link prediction task using benchmark datasets. Reported results show performance improvement.
Strengths: The proposed idea has some originality (a kind of creative combination of existing ideas) and the proposed algorithm design is reasonable.
Weaknesses: A major weakness is the writing and clarity. First of all, Sections 3.1 and 3.2 are not the proposed method but a short repeat of Section 2 of [34], in poorer quality. The subsection "Geometric Space Selector via Sparsely-gated MOE" from line 172 to line 188 is almost a repeat of Section 2.1 of [33], published in ICLR 2017. The design motivations of the two balancing regularization terms are not clearly explained, especially for l_2, which is not as straightforward as l_1. It is not clear how to construct different candidate geometric spaces; the authors only mention that there are three types of spaces. How exactly to compute distance using tangent vectors in a geometric space is not explained; more such information in the Embedding section in line 166 would be helpful. It is a pity that in the proposed-method section the authors use quite some space for a poor summary of existing knowledge rather than putting good effort into explaining their own proposed method well.
It seems that the goal of the research is to boost prediction performance by improving geometric representation learning. Under this goal, the experiment section has a weakness: it only compares different spaces, but lacks comparison with published results from mainstream existing works and state-of-the-art results on the same benchmark datasets for both matrix completion and link prediction. Without such comparison, it is hard to recognize the value/importance of geometric representation learning for solving downstream prediction tasks. Ablation studies on the effect of the regularization terms are missing.
The contribution is a little limited given that the only objective seems to be boosting performance. Under this goal, the direction of geometric representation learning may not even be the best one to pursue for each prediction task. It would be more interesting if the work could explore other potential of geometric representation learning, beyond prediction performance improvement.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: (1) Explain how the two candidate hyperbolic and two spherical spaces were constructed?
(2) In line 236, the authors said that the training examples are formed based on watched (v1) and unobserved (v2) movies. This is odd. In the MovieLens data, the matrix elements are ratings like 1, 2, 3, 4, 5. Isn't matrix completion supposed to predict the ratings of unobserved movies for a user? Why is it meaningful to push $s(u,v_1) - s(u,v_2) + m \le 0$?
(3) Can the authors briefly investigate what experimental settings some state-of-the-art works on matrix completion and link prediction have used for the same three benchmark datasets, what performance they obtained, and compare those published results with the authors' results?
(4) Perform some ablation studies to examine the role of l_1 and l_2.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Discussion on geometric representation learning is quite basic. It would be interesting to discuss why it is important and useful to research geometric representation learning, and whether the proposed approach has other potentials, in addition to boosting prediction performance.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the review, especially the constructive suggestion on the writing/clarification. We will revise the manuscript as suggested and clarify some of the confusing points.
##### **For Section 3.1 and section 3.2**
We will move Sections 3.1 and 3.2 to a separate preliminary section to ensure the paper is self-contained.
##### **The design motivations of the two balancing regularisation terms are not clearly explained**
- In the first regularization loss (eq-12), the batch-wise sums of the gate values are considered the importance of the experts. Minimizing the coefficient of variation of the importance distribution encourages all experts to have equal importance.
- The second loss (eq-13) is used to encourage all experts to receive roughly equal numbers of training examples. Although one can ensure equal expert importance with the first loss, the number of training examples received by each expert can still differ (e.g., one expert receives a few examples with large weights, while another receives many examples with small weights). Here, $\kappa$ can be viewed as a soft estimator of the number of examples assigned to each expert for a batch of inputs. Minimizing the coefficient of variation of this distribution helps encourage an equal number of training examples for each expert.
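A simplified numpy sketch of the two balancing terms, assuming a hard selection count in place of the smooth load estimator $\kappa$ (function names are illustrative, not the paper's):

```python
import numpy as np

def cv_squared(x, eps=1e-10):
    # Squared coefficient of variation: Var(x) / Mean(x)^2
    x = np.asarray(x, dtype=float)
    return x.var() / (x.mean() ** 2 + eps)

def balancing_losses(gates):
    # gates: (batch, num_experts) array of non-negative gate values,
    # with zeros for experts that were not selected for an example.
    importance = gates.sum(axis=0)                 # eq-12: per-expert total gate weight
    load = (gates > 0).sum(axis=0).astype(float)   # eq-13 uses a smooth estimator kappa;
                                                   # a hard count stands in here
    return cv_squared(importance), cv_squared(load)
```

Uniform gate values drive both terms to zero, while routing every example to one expert makes both terms large, which is exactly the behavior the two regularizers penalize.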
##### **Explain how the two candidate hyperbolic and two spherical spaces were constructed?**
We will provide a formal description in the paper (e.g., after line 166).
Taking the recommendation task as an example, suppose that we have two candidate hyperbolic spaces ($P_1$, $P_2$) and two spherical spaces ($D_1$, $D_2$) in the candidate space pool.
In the first step, we initialize four embedding vectors for each item (say $i_1, i_2, i_3, i_4$) and four embedding vectors for each user ($u_1, u_2, u_3, u_4$); all vectors live in the tangent space.
Secondly, we assume that $i_1$ and $u_1$ are in the hyperbolic space $P_1$; $i_2$ and $u_2$ are in the hyperbolic space $P_2$; $i_3$ and $u_3$ are in the spherical space $D_1$; and $i_4$ and $u_4$ are in the spherical space $D_2$. The distance between $i_*$ and $u_*$ is computed via eq. (16).
Thirdly, if we set $K=2$ and the model selects $P_1$ and $D_2$ for the final prediction, we combine the distance in $P_1$ (computed from $i_1$ and $u_1$) and the distance in $D_2$ (computed from $i_4$ and $u_4$) via equation (11).
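As an illustration of the last two steps, here is a sketch using the standard Poincaré-ball distance together with a hypothetical top-$K$ combination helper. The softmax weighting in `product_space_distance` is a stand-in for the combination rule, not equation (11) verbatim, and both function names are our own.

```python
import numpy as np

def poincare_dist(u, v, eps=1e-9):
    # Geodesic distance between two points in the Poincare ball (curvature -1)
    uu, vv = np.dot(u, u), np.dot(v, v)
    duv = np.dot(u - v, u - v)
    x = 1.0 + 2.0 * duv / ((1.0 - uu) * (1.0 - vv) + eps)
    return float(np.arccosh(x))

def product_space_distance(dists, gate_logits, k=2):
    # Keep the k spaces with the largest gate logits and combine their
    # per-space distances with softmax weights (illustrative combination rule)
    top = np.argsort(gate_logits)[-k:]
    w = np.exp(gate_logits[top] - gate_logits[top].max())
    w /= w.sum()
    return float(np.dot(w, np.asarray(dists, dtype=float)[top]))
```

For instance, with distances computed in three candidate spaces and a gate that strongly favors the last two, the final distance is the weighted combination of those two spaces only.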
##### **How exactly to compute distance using tangent vectors**
Because we have two separate applications, the distance for each application is defined in its own section. The final distance calculations are given in eqs. (2), (11), and (16) (for matrix factorization) and eq. (19) (for relational link prediction).
##### **Loss for Movielens-1m**
For MovieLens-1M, as we mentioned in line 240, we binarized the ratings into 0/1: ratings greater than or equal to 4 are treated as 1; otherwise, the entry is treated as unknown (0). This is a common setting in matrix factorization/collaborative filtering, since in most real-world applications feedback takes the form of likes/dislikes, viewed/not viewed, etc. As such, we adopted this contrastive loss function.
To avoid confusion, we will rename it to **matrix factorization** instead of matrix completion.
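The binarization and the pairwise contrastive objective described above can be sketched as follows (function names are illustrative; the margin value is an assumption):

```python
import numpy as np

def binarize_ratings(ratings, threshold=4):
    # Ratings >= threshold become positive feedback (1); the rest are unknown (0)
    return (np.asarray(ratings) >= threshold).astype(int)

def pairwise_hinge(s_pos, s_neg, margin=0.5):
    # Push the distance to a watched item (v1) below the distance to an
    # unobserved item (v2) by at least the margin: s(u,v1) - s(u,v2) + m <= 0
    return np.maximum(0.0, s_pos - s_neg + margin)
```

The hinge is zero whenever the watched item is already closer to the user than the unobserved item by more than the margin, and grows linearly otherwise.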
##### **Compare with SOTA**
We provide comparisons with other stoa methods on ML-1M below. The performance boost is still significant.
Our method: Recall@10=0.116;
MultiVAE [a]: Recall@10=0.101;
LightGCN [b]: Recall@10=0.0987;
DiffRec [c]: Recall@10=0.106;
On the relational link prediction task, we have already included 13 baselines. We add one more model here.
HittER (EMNLP 2021, [d]): FB15K-237 (MRR=.373; HR@1=.279; HR@3= .409; HR@10=.558); WN18RR (MRR=.503; HR@1=.462; HR@3=.516; HR@10=.584); We can see that our model still outperforms them.
##### **Apart from performance boosts**
It is worth noting that the performance boost is not trivial given the recent advances in these fields. Here, we'd like to highlight two advantages of the proposed method:
- **Enhanced expressiveness**: the proposed approach can enhance the representation power of current geometric representation learning, and performance boost is the best indicator.
- **Flexibility**: the model's flexibility is shown in two aspects: (a) the proposed approach is applicable to different domains and applications; (b) each data point has the flexibility to choose from a number of candidate product spaces to represent itself (e.g., Figures 4 and 5).
##### **Perform some ablation studies to examine the role of l_1 and l_2**
We conduct ablation studies on the WN18RR dataset.
- w/o $\ell_1$ and $\ell_2$: MRR: 0.515 | H@1: 0.463 | H@3: 0.536 | H@10: 0.595
- w/o $\ell_2$ but with $\ell_1$: MRR: 0.519 | H@1: 0.471 | H@3: 0.540 | H@10: 0.602
- w/o $\ell_1$ but with $\ell_2$: MRR: 0.520 | H@1: 0.473 | H@3: 0.543 | H@10: 0.605
##### **why geometric representation learning and the potential of the proposed approach**
Geometric representation learning is a fundamental problem and useful in many domains (as reaffirmed by other reviewers). It can introduce a strong inductive bias to the model, making the learning process easier and boosting the expressiveness of the representation. In the future,
we plan to carry this concept forward into more sophisticated deep neural networks, such as hyperbolic neural networks or spherical neural networks. In doing so, more complex tasks can benefit from the proposed data-informed geometric space selection idea.
##### Reference
- [a] Liang, Dawen, et al. "Variational autoencoders for collaborative filtering." WWW 2018.
- [b] He, Xiangnan, et al. "LightGCN: Simplifying and powering graph convolution network for recommendation." SIGIR 2020.
- [c] Wang, Wenjie, et al. "Diffusion Recommender Model." SIGIR (2023).
- [d] Chen, Sanxing, et al. "HittER: Hierarchical Transformers for Knowledge Graph Embeddings." Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. 2021.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the response. It has addressed a good amount of my concerns, but I still have some on the experiments. I mentioned that it lacks comparisons with SOTA on matrix completion and link prediction, meaning comparisons with the SOTA results on those two tasks, not just limited to embedding techniques. Such a comparison helps assess the practical value of the algorithm. The added results are still within the embedding field. So I will raise my rating, but only one grade up.
---
Reply to Comment 1.1.1:
Comment: We appreciate that you raised your rating and the opportunity to clarify our work. We want to emphasize that the methods we've listed are not limited to conventional embedding techniques.
For instance, DiffRec was recently introduced at SIGIR 2023 (held July 23–27, 2023, after the NeurIPS submission deadline). It is a diffusion-model-based approach and claims to be the state of the art. LightGCN is a graph-neural-network-based approach. MultiVAE is a variational-autoencoder-based approach.
The baselines compared on the relational link prediction task are also not limited to embedding techniques. For instance, ConvE is a convolutional-network-based approach, M2GNN is a graph-neural-network-based method, and HittER is a transformer-based method. Here we list two more recent methods (although the comparison is not entirely fair):
- KGTuner [A] (ACL 2022): FB15K-237 (MRR=.352; HR@1=.263; HR@3=.387; HR@10=.530); WN18RR (MRR=0.484; HR@1=0.440; HR@3=.506; HR@10=0.562). This method is based on exhaustive hyper-parameter search. Our method outperforms it on all metrics.
- CSProm-KG [B] (ACL 2023, available since 4 July 2023, after the NeurIPS submission deadline): FB15K-237 (MRR=.358; HR@1=.269; HR@3=.393; HR@10=.538); WN18RR (MRR=0.575; HR@1=0.522; HR@3=.596; HR@10=0.678). This is concurrent work that uses a pretrained large language model (LLM, BERT-Large) as an external source. Since WN18RR is sampled from the WordNet lexical database and BERT-Large is trained on a large corpus of lexical data, it is unsurprising that this method obtains very high scores on WN18RR. However, its relatively modest scores on FB15K-237 (sampled from Freebase) suggest limited practicality for non-lexical knowledge graphs. It is not fair to compare an LLM-based approach with ours, as using an LLM may lead to data leakage.
To address any potential misconceptions, our experimental design was primarily aimed at showcasing the superiority of the proposed approach over existing geometric representation learning methods. We hope that this clarification provides a more accurate understanding of our work and its contributions.
- [A] Yongqi Zhang, Zhanke Zhou, Quanming Yao, and Yong Li. 2022b. Efficient hyper-parameter search for knowledge graph embedding. ACL 2022.
- [B] Chen, Chen, et al. "Dipping PLMs Sauce: Bridging Structure and Text for Effective Knowledge Graph Completion via Conditional Soft Prompting." Findings of the Association for Computational Linguistics: ACL 2023. 2023. | Summary: The goal of this paper is to learn the geometry (manifold) underlying given data points. Rather than learning an arbitrary Riemannian manifold from the data, the paper models this manifold as a Cartesian product of manifolds with constant curvature (three prototypes are used: Euclidean, spherical, hyperbolic). It imposes a certain vector space structure (‘gyrovector’) on these prototypes, where the operations are functions of the curvature. This imposition is a key step as it allows the definition of log and exp maps from the manifolds to tangent spaces, yielding vector representations of points on manifolds.
With this geometry, the paper derives a framework to fit a manifold to the training data. Each data point has components in some number of these manifolds. The paper first estimates the probability of each data point having a component in a specific component manifold. They then add a task-specific objective function that seeks to optimize the data representation so as to maximize task performance. With these cost functions, fitting the manifold becomes an optimization problem.
The conceptual applications of this framework include matrix completion and link prediction for relational graphs. A number of experimental results demonstrate the ideas and their performance relative to prior approaches.
Strengths:
-- Improvement over the previous data-driven manifold learning by including curved manifolds as basic components.
-- Interesting use of the gyrovector machinery to derive Euclidean representations for feeding into optimization tools.
-- The experiments provide evidence of success in learning some geometry from the data.
Weaknesses:
-- Can one represent arbitrary manifolds using a direct product of these prototypes? Perhaps not. Then what is lost in posing the problem in this way as opposed to a completely nonparametric approach of manifold learning?
-- The paper can provide some intuition on the overall objective function and some more details on how the optimization is performed.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors:
Minor point:
1. I was just checking the Mobius sum for elements of a unit sphere in R^n – the result was (Y - <X,Y> X)/(1 - <X,Y>), which is not on the unit sphere. Does it mean that the sum leaves the set {\cal M}_c? Please clarify. This is probably my calculation mistake.
2. Also this expression does not seem to be symmetric in X and Y. Is that correct ?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations:
The paper discusses one limitation -- the addition of a hyperparameter (K) -- with respect to the previous work on Euclidean product spaces. However, they don't discuss how general their formulation is with respect to an arbitrary data manifold, that is, a manifold with varying curvature.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the positive rating and constructive suggestions!
##### **Can one represent arbitrary manifolds using a direct product of these prototypes?**
We cannot represent arbitrary manifolds in this way. We mainly focus on three popular manifolds (spherical, hyperbolic, and Euclidean) which have well-defined projection forms and metrics. Other manifolds, such as symplectic manifolds, are out of the scope of this paper.
It is nontrivial to represent a manifold with varying curvature in a uniform way. However, the curvature can be learnable in our model; that is, it can be treated either as a hyper-parameter or as a learnable parameter optimized together with the model.
##### **Then what is lost in posing the problem in this way as opposed to a completely nonparametric approach of manifold learning?**
This study centers around parametric learning algorithms, and the highlighted applications also exhibit a preference for parametric approaches over non-parametric ones. To illustrate, numerous works [a, b] in the literature have shown that matrix factorization yields better results compared to non-parametric techniques like KNN-based models. In the future, we plan to carry forward this concept into more sophisticated deep neural networks, such as hyperbolic neural networks or spherical neural networks.
##### **The paper can provide some intuition on the overall objective function and some more details on how the optimization is performed.**
Thank you very much for the suggestion, we will add more intuition/explanations on the objective functions to the paper.
First, the two regularization losses can be viewed as soft constraints on the space selector.
- In the first regularization loss (eq-12), the batch-wise sum of the gate values is considered the importance of an expert. Minimizing the coefficient of variation of the importance distribution encourages all experts to have equal importance.
- The second loss (eq-13) is used to encourage all experts to receive roughly equal numbers of training examples. Although one can ensure equal expert importance with the first loss, the number of training examples received by each expert can still differ (e.g., one expert receives a few examples with large weights while another receives many examples with small weights). Here, $\kappa$ can be viewed as a soft estimator of the number of examples assigned to each expert for a batch of inputs. Minimizing the coefficient of variation of this distribution encourages an equal number of training examples for each expert.
Second, the two task specific losses are contrastive losses which encourage positive pairs to be closer and make the distance between negative entities larger.
For the optimization, since all the tensors are projected via the stereographic projection, we use Adam as the optimizer.
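To make the two regularizers concrete, here is a minimal NumPy sketch (not the authors' code; the gate matrix is made up, and the hard routing count below stands in for the soft estimator $\kappa$ of eq-13):

```python
import numpy as np

def cv_squared(v, eps=1e-10):
    # squared coefficient of variation: (std / mean)^2
    return np.var(v) / (np.mean(v) ** 2 + eps)

# hypothetical gate values for a batch of 3 data points over 3 candidate spaces
gates = np.array([[0.7, 0.3, 0.0],
                  [0.0, 0.6, 0.4],
                  [0.5, 0.0, 0.5]])

importance = gates.sum(axis=0)   # batch-wise sum of gate values per expert
l1 = cv_squared(importance)      # eq-12 style loss: equalize expert importance

load = (gates > 0).sum(axis=0)   # hard count of examples routed to each expert
l2 = cv_squared(load)            # eq-13 style loss: equalize per-expert example counts
```

Driving `l1` and `l2` toward zero pushes both the total gate mass and the number of routed examples to be uniform across the candidate spaces.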
##### **I was just checking the Mobius sum for elements of a unit sphere in R^n – the result was (Y - <X,Y> X)/(1 - <X,Y>), which is not on the unit sphere. Does it mean that the sum leaves the set {\cal M}_c? Please clarify. This is probably my calculation mistake.**
The calculation is correct. However, the Möbius sum is defined on the stereographic projection model instead of the original manifold. We provided some examples in Figures 1 and 2 of what the projected model looks like for both the hyperbolic and spherical models. We also provided two examples in Figure 3 of the Möbius sum calculation results when c=1 and c=-1. See [pdf](https://openreview.net/attachment?id=VzCnW8Uuls&name=pdf).
##### **Also this expression does not seem to be symmetric in X and Y. Is that correct ?**
Yes, in general the expression is not symmetric. It is symmetric only in some special cases: (1) the zero-vector case, i.e., when one of the vectors is zero; (2) the zero-curvature case, in which it reduces to Euclidean addition.
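For concreteness, a small numerical check (a sketch assuming the standard $\kappa$-stereographic Möbius addition formula with $\kappa>0$ for the spherical case, not the paper's code) reproduces both observations: for unit vectors with $\kappa=1$ the sum equals the reviewer's expression and leaves the unit sphere (the operation lives in the projected model), and it is not symmetric:

```python
import numpy as np

def mobius_add(x, y, kappa):
    # kappa-stereographic Mobius addition (kappa > 0: spherical projection
    # model, kappa < 0: hyperbolic, kappa = 0: ordinary Euclidean addition)
    xy, x2, y2 = np.dot(x, y), np.dot(x, x), np.dot(y, y)
    num = (1 - 2 * kappa * xy - kappa * y2) * x + (1 + kappa * x2) * y
    den = 1 - 2 * kappa * xy + kappa ** 2 * x2 * y2
    return num / den

x, y = np.array([1.0, 0.0]), np.array([0.6, 0.8])       # both on the unit sphere
s = mobius_add(x, y, kappa=1.0)
reviewer = (y - np.dot(x, y) * x) / (1 - np.dot(x, y))  # the reviewer's expression
# s = [0, 2]: off the unit sphere, but a valid point of the stereographic model
# mobius_add(y, x, 1.0) differs from s: the sum is not symmetric
# mobius_add(x, y, 0.0) equals x + y: zero curvature recovers Euclidean addition
```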
##### Reference
- [a] Hu, Yifan, Yehuda Koren, and Chris Volinsky. "Collaborative filtering for implicit feedback datasets." 2008 Eighth IEEE International Conference on Data Mining. IEEE, 2008.
- [b] Koren, Yehuda. "Factorization meets the neighborhood: a multifaceted collaborative filtering model." Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining. 2008. | Summary: Data representation is an important problem in today's deep learning world. Representation beyond Euclidean geometry, such as spherical or hyperbolic spaces can provide additional flexibility and benefits in capturing underlying properties of data. For example, hyperbolic space can better capture data that has inherent hierarchical structure, and spherical space can better model cyclical structure. In this paper, the authors provide a method to automatically map the individual data points to different geometric spaces using a mixture of experts network. By mapping the data points to different geometric spaces automatically, the authors show advantages in many real world tasks such as matrix completion and link prediction for relational graphs.
Strengths: 1) The paper addresses an important problem of automatically mapping the data points to different geometric spaces and motivates the problem well in the paper.
2) This is a challenging problem and many prior methods typically focus on mapping all the data points to a single geometric space. The formalism is also elegant and the proposed solution shows improvement on matrix completion and graph link prediction over other baselines.
Weaknesses: 1) While the matrix completion and link prediction seems like good applications, it is not clear whether the proposed techniques can be extended to some of the other mainstream learning applications in vision and language domains. Furthermore, the early layers in many deep neural networks can already exploit the underlying intrinsic data properties to extract features to show benefits. It is not completely clear whether explicit assignment to individual spaces would provide any additional benefits in some of these newer applications.
2) There has been many prior methods addressing this problem, and the novelty is not explicitly discussed. It would be good if the authors could better clarify this w.r.t MOE and other methods that exploit hybrid geometrical spaces.
3) In many problem settings the underlying data distribution may be predominantly hierarchical or cyclical, and not sure whether it provides strong advantages with the hybrid approaches, and furthermore, the number of geometrical spaces and identification of these geometrical spaces still seem manual and the modern deep learning machinery may be implicitly learning and mapping them as they find them useful in reducing the training loss.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Overall, this is a well written paper addressing an important problem. Please see concerns above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: No concerns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the positive feedback and reconfirming the importance of the explored problem.
##### **Q1**: While the matrix completion and link prediction seems like good applications, it is not clear whether the proposed techniques can be extended to some of the other mainstream learning applications in vision and language domains. Furthermore, the early layers in many deep neural networks can already exploit the underlying intrinsic data properties to extract features to show benefits. It is not completely clear whether explicit assignment to individual spaces would provide any additional benefits in some of these newer applications.
**Answer**: Due to the limited rebuttal time frame, we are not able to conduct experiments on the mentioned applications, but we can provide some thoughts on how to apply this idea to them. For example, in current mainstream NLP tasks (e.g., classification, recognizing textual entailment), hyperbolic neural networks [a] (which have been shown to outperform Euclidean neural networks [a]), spherical neural networks, and ordinary Euclidean neural networks can be built for specific NLP tasks. Then, we can use the sentence representations obtained from pretrained language models (e.g., BERT) and the selection network to choose which type of geometric network to employ. In doing so, we can also realize the goal of data-informed geometric space selection.
##### **Q2**: There has been many prior methods addressing this problem, and the novelty is not explicitly discussed. It would be good if the authors could better clarify this w.r.t MOE and other methods that exploit hybrid geometrical spaces.
Thank you very much for the constructive suggestion. We will explicitly clarify the relationship of the proposed approach with MOE and existing hybrid geometrical spaces.
The proposed approach is orthogonal to existing MOE works. Existing works usually apply the MOE paradigm to Euclidean neural networks for better performance or for scaling up model size. However, to the best of our knowledge, this work is the first attempt to seamlessly integrate the sparsely gated MOE learning paradigm into geometric representation learning. The selected spaces are tightly coupled to form a product space that has a strict mathematical definition and meaning. Compared with existing hybrid geometric spaces (e.g., product space representation learning), in our approach each data point can inform the model which geometric space to use, while existing methods treat every data point equally without any customization. In doing so, the proposed method can fully elicit the expressive power of geometric representation learning.
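As an illustration of this per-data-point selection, here is a minimal sketch of sparse top-$K$ gating over candidate spaces (the score vector and function names are hypothetical; the paper's gating network additionally uses learned, noise-perturbed scores):

```python
import numpy as np

def select_spaces(scores, k):
    # keep the top-k candidate geometric spaces for this data point
    # and renormalize their gate values with a softmax
    topk = np.argsort(scores)[-k:]          # indices, ascending by score
    w = np.exp(scores[topk] - scores[topk].max())
    return topk, w / w.sum()

# hypothetical gating scores of one data point over 4 candidate spaces
scores = np.array([1.5, -0.3, 0.8, 0.1])
idx, gates = select_spaces(scores, k=2)
# the point's final distance is then the gate-weighted combination of the
# distances computed in the selected spaces (cf. eq (11))
```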
##### **Q3**: In many problem settings the underlying data distribution may be predominantly hierarchical or cyclical, and not sure whether it provides strong advantages with the hybrid approaches, and furthermore, the number of geometrical spaces and identification of these geometrical spaces still seem manual and the modern deep learning machinery may be implicitly learning and mapping them as they find them useful in reducing the training loss.
**Answer**: It is true that in some problem settings the underlying data distribution can be predominantly hierarchical or cyclical. However, we would like to clarify that we did not make any assumption on the underlying data structure, and in most real-world applications the underlying data structure is difficult to know. Moreover, our approach subsumes non-hybrid approaches: if we know that the data is predominantly hierarchical, we can set all the candidate spaces to be hyperbolic.
Pertaining to modern deep learning methods, although this is not in the scope of this paper, we can see from the existing literature that non-Euclidean neural networks usually perform better than Euclidean neural networks on many tasks [a], indicating that it is not trivial for modern deep learning methods to infer the underlying geometry implicitly. Our method introduces an inductive bias that ensures effective capture of the data-geometric space relationships, which existing deep learning methods can hardly achieve.
##### Reference
- [a] Ganea, Octavian, Gary Bécigneul, and Thomas Hofmann. "Hyperbolic neural networks." Advances in neural information processing systems 31 (2018).
---
Rebuttal 2:
Title: Acknowledging the rebuttal
Comment: I thank the authors for the rebuttal. It addresses my concerns and I would like to keep my positive rating. | Summary: In many applications (especially those involving discrete data structures), choosing the right geometry for the embedding space, matching the structure of the data, can lead to significant performance gains. Extant approaches often make an ad hoc choice or use heuristics for the type of geometry applicable globally for the entire data. The paper proposes a novel strategy for the local selection of the product space with appropriate geometry for each data point, using a sparse gating mechanism. The approach is validated on the matrix completion for the movie ratings prediction problem, and link prediction on relational graphs (WordNet, Freebase KG) demonstrating significant performance gains.
Strengths: Strengths of the paper are listed below:
**Relevance** The paper addresses a problem – automatic, and local selection of a product of subspaces with appropriate geometry – of relevance to a wide audience including those working on knowledge graphs, and recommendation systems.
**Originality** While the components of the proposed approach, embedding in different geometric spaces and sparsely gated MOEs, are not novel, I have not seen the latter utilized to make a local (per-data-point) selection of the optimal product space. While I'm not deeply familiar with the literature in this area, I believe the proposed approach is novel. I rate the core technical novelty as incremental.
**Technical Quality** The technical approach mostly appears sound apart from some doubts that I point to in the weaknesses section.
**Evaluation** The evaluation using the tasks of (a) matrix completion in the movie-ratings setting, and (b) link prediction in relational graphs (WordNet, and Freebase KG), and the demonstrated significant gains in performance (especially on link prediction in the Freebase KG over the SOTA) validate the utility of the proposed approach. This strength is weakened by points listed in the weaknesses section.
**Significance** The approach has the potential for significant impact over wide areas of research once the weaknesses have been addressed.
Weaknesses: Weaknesses of the paper are listed below:
**Technical Quality** I'm not convinced regarding the appropriateness of using the CNN layers $f_1(.)$ and $f_2(.)$. Since the vectors $e_p^{(i)}$ lie in different spaces, even if lifted to the corresponding tangent spaces, $f_j(.)$ layer would perform algebra on them and take weighted combinations of entities (embeddings) in entirely different spaces. To me this doesn’t seem proper.
**Evaluation**
- Matrix completion for movie-ratings can be considered a classical problem with a lot of algorithms benchmarked on MovieLens 1M. However, the paper doesn’t directly provide comparisons with the state of the art (bar chart in Figure 3). Secondly, it doesn’t provide a table with quantitative results, enabling a better understanding of the exact performance gains achieved.
- Figure 6 shows that while the performance may be more robust to the choice of N, K has a significant impact. In the spirit of local selection of the relevant product spaces, it stands to reason that optimal K may also vary from point to point. Was this aspect investigated?
**Clarity**
- (l. 111) What is a ‘Rheinmain Geometric Space’? If a typo, kindly fix, else provide a definition with a reference to the literature.
- (l. 201-208; (12)-(15)) Kindly add explanations for how the two regularizers, $l_1$ and $l_2$, achieve load balancing. Point to the relevant literature where they were introduced or mention that they are novel introductions.
- (Fig. 6) The value of K for the left panel, and N for the right panel are not identified.
- some typos need to be fixed (not considered in evaluating the paper).
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Kindly address the weaknesses pointed in the critique above. I summarize them below:
- appropriateness of algebra involving embeddings in different spaces.
- quantitative comparison with SOTA on the movie-ratings problem
- local selection of K.
- Clarify with references (a) Rheinmain geometric space, (b) how the regularizers achieve load balancing.
- Discuss limitations of the work (see below)
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: Authors don’t address limitations of the work.
I don’t think there are any direct negative societal implications. Other limitations and opportunities for improvement are addressed in my responses to previous questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for confirming the novelty as well as the potential of the proposed approach. Also, the suggestions can certainly help us improve the manuscript.
Below we answer the questions and make some clarifications.
##### **Why do we use CNN layers (appropriateness of algebra involving embeddings in different spaces)?**
All the vectors are initialized in Euclidean space and then mapped to the corresponding geometric spaces; for simplicity, the CNN layers are applied over the initialized Euclidean vectors regardless of which geometric space those vectors will be mapped to. We agree that it might be possible to use hyperbolic/spherical neural networks in this process. However, in that case each geometric space would require its own selection network, and we would still need to combine the outputs of all the selection networks to arrive at a single decision. We will leave exploring more advanced selection mechanisms as future work.
##### **Comparison with state of the art on Movielens**
We provide comparisons with other SOTA methods on ML-1M below. We can see that, compared with the most recent models, the performance gain is still significant.
Our method: Recall@10=0.116;
MultiVAE (WWW 2018 [a]), a variational-autoencoder-based model: Recall@10=0.101;
LightGCN (SIGIR 2020 [b]), a graph-convolutional-network-based model: Recall@10=0.0987;
DiffRec (SIGIR 2023, first available in April 2023 [c]), a diffusion recommender model: Recall@10=0.106;
##### **local selection of K**
In our current design, K is selected for the whole dataset instead of for each data point. It is reasonable that each data point may require a different K, but achieving this goal is quite challenging. One potential solution is to build an MOE network with a different K for each data point, but this can be unrealistic, as the model size would increase linearly with the data size.
Moreover, K is usually set to a small value in our experiments (selected from 2, 3, 4); as such, manual hyper-parameter search works well in this case.
##### **Clarification**
- (1) Rheinmain geometric space ⇒ it is a typo, and it should be Riemannian geometric space.
- (2) The load-balancing approach is not a novel contribution but comes from reference ***. We will add these references [d][e] to the paper.
The two regularization losses can be viewed as soft constraints on the space selector.
- In the first regularization loss (eq-12), the batch-wise sum of the gate values is considered the importance of an expert. Minimizing the coefficient of variation of the importance distribution encourages all experts to have equal importance.
- The second loss (eq-13) is used to encourage all experts to receive roughly equal numbers of training examples. Although one can ensure equal expert importance with the first loss, the number of training examples received by each expert can still differ (e.g., one expert receives a few examples with large weights while another receives many examples with small weights). Here, $\kappa$ can be viewed as a soft estimator of the number of examples assigned to each expert for a batch of inputs. Minimizing the coefficient of variation of this distribution encourages an equal number of training examples for each expert.
##### **Discuss limitations of the work**
We have a short subsection (3.4) discussing the limitations of the proposed method. Based on the review, we also identified one more limitation: the proposed approach cannot assign a different K to each data point.
##### **Reference**
- [a] Liang, Dawen, et al. "Variational autoencoders for collaborative filtering." Proceedings of the 2018 world wide web conference. 2018.
- [b] He, Xiangnan, et al. "Lightgcn: Simplifying and powering graph convolution network for recommendation." Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval. 2020.
- [c] Wang, Wenjie, et al. "Diffusion Recommender Model." SIGIR (2023).
- [d] Shazeer, Noam, et al. "Outrageously large neural networks: The sparsely-gated mixture-of-experts layer." arXiv preprint arXiv:1701.06538 (2017).
- [e] Bengio, Emmanuel, et al. "Conditional computation in neural networks for faster models." arXiv preprint arXiv:1511.06297 (2015).
---
Rebuttal Comment 1.1:
Title: Thanks for the response.
Comment: Thanks for the response. I don't have any further questions. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their suggestions, and we answer the corresponding questions under each review's rebuttal. The pdf contains some figures showing the stereographic projection models for hyperbolic & spherical spaces. The Möbius sum is also demonstrated in this pdf.
Pdf: /pdf/07cf220f7b4ae6cb8067cab20744c5ce3d11fcfc.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Permutation Equivariant Neural Functionals | Accept (poster) | Summary: The paper introduces and evaluates permutation equivariant neural functional networks (NFNs).
Neural functionals are models which take weights of other neural networks, or more general weight-space features, like gradients or sparsity masks, as inputs.
Their permutation equivariance addresses the issue that any particular vector of weight-space features is just an arbitrary representative of an equivalence class of weight-space features that correspond to exactly the same network.
Specifically, the weights and biases in all hidden layers may be permuted arbitrarily, which corresponds to simultaneous permutations of rows and columns of the previous and following weight matrix, respectively.
Permutation equivariant NFNs ensure predictions which are independent from the particular permutation.
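This weight-space symmetry is easy to verify numerically; the following sketch (a generic one-hidden-layer ReLU MLP, not the paper's code) shows that permuting the hidden neurons leaves the network's function unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 3)), rng.normal(size=5)
W2, b2 = rng.normal(size=(2, 5)), rng.normal(size=2)
x = rng.normal(size=3)

def mlp(W1, b1, W2, b2, x):
    # one-hidden-layer ReLU MLP: y = W2 relu(W1 x + b1) + b2
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

P = np.eye(5)[rng.permutation(5)]  # permutation matrix over hidden neurons
# permuting rows of (W1, b1) and columns of W2 simultaneously yields the
# same function, since ReLU is pointwise and P^T P = I
same = np.allclose(mlp(W1, b1, W2, b2, x),
                   mlp(P @ W1, P @ b1, W2 @ P.T, b2, x))
```

Any such permuted weight vector is an equally valid representative of the same network, which is exactly the equivalence an NFN's predictions must respect.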
As usual for equivariant networks, this reduces the number of model parameters (of the NFN) significantly, and makes learning feasible in the first place.
The authors consider two specific group actions, which are the first order permutation actions and trivial actions on invariant scalars.
Equivariance refers throughout the paper to mappings that commute with the first order action in the input and output, while invariance refers to mappings from first order actions to scalars.
Equation 3 presents a linear NFN layer, which computes features as usual by taking learned linear combinations of weight-space features that are summed over different combinations of rows and columns.
Proposition 1 claims that this layer is 1) sufficient and 2) necessary for equivariance, i.e. spans the space of the most general linear equivariant maps (I have some concerns here, discussed in the weakness section).
As nonlinearities, NFNs use pointwise nonlinearities, which commute with permutation actions.
A straightforward extension to permutations of CNN channels is discussed towards the end of section 2.2.
Besides the HNP model, which considers the "correct" hidden neuron permutation symmetry,
an NP model, which additionally permutes input and output neurons and has a further reduced number of parameters, is introduced.
In order for the network to be able to break this excess equivariance, positional encodings are added.
Extensive experiments evaluate HNP-NFNs and NP-NFNs and compare them to baseline models.
The first two applications are permutation invariant predictions of CNN test accuracies from their weights and invariant classifications of implicit neural representations.
In both cases the NFNs perform significantly better than baselines; the HNP variant beats the NP variant on the first task but loses on the second (the two remain quite close).
The other two applications are permutation equivariant predictions of a lottery ticket hypothesis winning ticket sparsity mask and the editing of implicit neural representations.
In the former, NP-NFN performs close to a task-specific baseline, while HNP-NFN is prohibitively expensive, thus supporting the need for the NP-NFN model.
In the latter, both NP-NFN and HNP-NFN outperform their baselines.
Strengths: The paper is well motivated and very clearly written - it was really a joy to read!
After introducing the mathematical framework, the authors derive the most general linear equivariant mappings for the considered group actions (I have some concerns here, see the weakness section).
Explicit equations for the required number of parameters of the different models are given.
The experiments are very extensive and show the utility and superiority of NFNs in various applications.
Weaknesses: I have two technical concerns regarding the claims about the generality of the proposed linear equivariant map in proposition 1.
Firstly, it should be mentioned that the proof assumes a specific choice of group action w.r.t. which the layer is equivariant, namely first order permutation actions.
Permutation equivariant networks have also been built for other actions, like irreducible representations or higher order tensor product actions - these would lead to other layers.
This point needs to be discussed and the statement of proposition 1 needs to be made precise, e.g. by saying that the map $T$ is equivariant w.r.t. the specific first order action $\sigma$ in the input and output.
[edit: this claim was wrong]
~Secondly, I believe that the mapping in equation 3 is only then the most general $S_{n_1}\times\dots\times S_{n_{L-1}}$ equivariant intertwiner when $n_1\neq\dots\neq n_{L-1}$ is assumed.
If, however, $n_i=n_j$ for some layers $i\neq j$, the same permutation group $S_{n_i} = S_{n_j}$ acts on these neurons, and it should be possible to linearly accumulate the corresponding weight-space features after $\star$-summing over the other indices.
A similar summing over features is already present in equation 3, namely the first term, which sums invariant scalars that are accumulated from all layers.
Note that this contribution can in practice become very large, since networks are in practice often constructed such that they have the same number of hidden neurons or channels throughout many different layers.
The more general equivariant mappings should be benchmarked against the current NFNs in the experimental section.~
The sufficiency and necessity of the invariant NFN layers in section 2.3. is not supported by a similar theorem as proposition 1.
Due to these weaknesses I decided to give only a weak accept for the current version.
However, the paper has huge potential, and I would be happy to switch to a strong accept once the issues are addressed.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: The paper is very clearly written and I don't have any remaining questions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Some limitations and potential future work are discussed in the conclusion.
It should be added that one could consider more general permutation group actions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and detailed analysis of the technical aspects of our work.
> It should be mentioned that the proof assumes a specific choice of group action w.r.t. which the layer is equivariant, namely first order permutation actions.
We will make it clear at the beginning of the proof that we are only concerned with equivariance to a particular choice of action (as defined in Equation 1).
> Secondly, I believe that the mapping in equation 3 is only then the most general $S\_{n\_1} \times \cdots \times S\_{n\_{L-1}}$ equivariant intertwiner when $n\_1 \neq \cdots \neq n\_{L-1}$ is assumed.
We don't follow how two layers having the same number of neurons creates a special case. To aid us in understanding and responding to this point better, could you write out an example of any additional or missing terms explicitly, which would show that Eq 3 is not fully general?
Additionally, would this claim not contradict Proposition 1? Though it is possible that we made a mistake and dropped terms somewhere in the derivation of the layer.
> The sufficiency and necessity of the invariant NFN layers in section 2.3. is not supported by a similar theorem as proposition 1.
We will add an analogous proposition as Prop 1, but for the sufficiency and necessity of the invariant layers in Section 2.3, using a similar strategy as in Appendix B.2.
---
Rebuttal Comment 1.1:
Comment: Dear Authors, thank you for your replies.
Instead of mentioning the group action in the proof, could you please mention it in the theorem? This should really be part of the statement itself. I would also strongly encourage a paragraph on other group actions, e.g. mentioning that they exist, might have a different performance, and citing related work.
My reasoning in the second weakness was indeed erroneous: I was thinking about an action of the _diagonal subgroup_ $S_N$ of $S_N \times S_N$ for two layers with $N$ neurons, which would allow to linearly combine their features. This is of course not the case, since the factors act independently.
Thanks for adding the proposition.
I didn't read the concurrent/prior work by Navon et al. My review was and is only concerning the content of the current submission, irrespective of whether there was a similar submission a few months earlier.
I am generally happy with the submission. If the authors address the first point in this reply I would update my rating to 7.
---
Reply to Comment 1.1.1:
Comment: Thank you for your prompt response.
> Instead of mentioning the group action in the proof, could you please mention it in the theorem? This should really be part of the statement itself.
Understood. We will modify both Proposition (1) and the preliminaries to make this clear. We will modify L91-93 (introducing S-equivariance) as follows:
"We refer to $f$ as **S-equivariant** if $\sigma f(U) = f(\sigma U)$ for all $\sigma \in \mathcal{S}, U \in \mathcal{U}$. In this work, we exclusively focus on equivariance with respect to the action introduced above so, e.g., $\sigma U$ is defined according to Eq (1)."
We will then modify Proposition (1) as follows:
"The NF-Layer $H$ (Eq 3-4) is S-equivariant with respect to the action in Eq (1), applied to both the input and output spaces. [...]"
> I would also strongly encourage a paragraph on other group actions, e.g. mentioning that they exist, might have a different performance, and citing related work.
We will include this--for higher order actions we assume you are referring to the line of work in, e.g., [1,2]? We will also discuss the line of work approaching equivariant layer design by focusing on irreducible representations. We also welcome any references you think may be relevant and will include and discuss them.
[1] Thiede et al. The general theory of permutation equivarant neural networks and higher order graph variational encoders.
[2] Pan and Kondor. Permutation Equivariant Layers for Higher Order Interactions.
> I was thinking about an action of the _diagonal subgroup_ […]
Understood, thanks for clarifying. We will also modify the preliminaries to make it clear that the $\sigma_i$ are independent. | Summary: This paper proposes the NF-Layer that maps the weight space of a deep neural network (DNN), including MLP and CNN, to another weight space, possibly with a different number of channels. Neural Functional Networks (NFNs) are then constructed using the NF-Layers to process the weight space of a DNN. The NF-Layer is designed to be S-equivariant; that is, if the input weights of a DNN are permuted so as not to change the output of the DNN, the output of the NF-Layer, which also lies in a weight space, is permuted in the same way. Such a property exploits the equivariance inherent in DNN weights and enhances the efficiency of modeling NFNs, much as convolutional structure enhances the efficiency of modeling image inputs. Experimental results are presented to demonstrate the usefulness of NF-Layers in applications of NFNs, including accuracy prediction, classification of implicit neural representations (INRs), winning ticket prediction, and style-editing through INRs.
Strengths: * An interesting application of the equivariant architecture based on parameter-sharing.
* Introduction of the NP setting in the permutation definition. Although it requires a stronger assumption than the HNP setting, the resulting NF-Layer can be modeled more efficiently under the NP assumption.
* Experiments suggesting the advantage of the proposed NF-Layers in interesting applications of NFNs.
Weaknesses: * A network architecture with equivariance based on parameter-sharing is not original and has been proposed in [51], as the authors suggest.
* NF-Layers cannot directly handle some DNN models, including ResNet and Transformer.
(Minor)
* Figure 1 is difficult to understand at first glance.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I can understand the permutation equivariant property in DNN weights, but I cannot intuitively understand why the prediction accuracy of NFNs improves with S-equivariant NF-Layers. Is it because the number of parameters to be trained is reduced by the S-equivariant structure, while expressiveness is retained?
It would be better to explain the multi-channel extension of permutations more carefully in ll.86-90. When I first read it, I mistakenly thought it was referring to the convolution channel. Also, the explanation of Figure 1 should mention that the proposed NF-Layer may change the channel size in this sense.
In Figure 2, $W_{\ast,j}^{(i+1)}$ (with yellow c) -> $W_{j,\ast}^{(i)}$?
Equation in Figure 2 should be matched with Eq.(3).
In ll.155-160, I could not understand the motivation for using position embeddings with NFN_{NP}. In which experiments with NFN_{NP} are position embeddings used?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: As described by the authors, there is a concurrent work by Navon et al.[45] that has a very similar motivation and methodology. But I think this paper has its original contributions that the authors mention and deserves to be published.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and interesting question.
> A network architecture with equivariance based on parameter-sharing is not original and has been proposed in [51]
To clarify, [51] (Equivariance through parameter sharing) provides general strategies for developing layers equivariant to a given choice of group action. [51] does not study the permutation symmetries of neural networks' weights and does not develop ways of processing weights. Rather, we utilize strategies from [51] to develop our NF-layers.
> NF-Layers cannot directly handle some DNN models, including ResNet and Transformer.
This is a good observation, architectures like ResNet have more complicated weight space permutation symmetries compared to feedforward networks so our definitions of the (hidden) neuron permutation groups do not directly apply. We believe that extending these layers to handle more general network topologies is an important (and nontrivial) direction of future work.
> cannot intuitively understand why the prediction accuracy of NFNs improves with NF-Layers with S-equivariance. Is it because the parameter to be trained is reduced by the S-equivariant structure, while retaining its expressiveness?
You are correct that enforcing S-equivariance reduces the parameter space, and we know that the target function must be S-equivariant (or invariant). So ideally enforcing equivariance does not sacrifice our ability to express the target function, while reducing the space of parameters to search over, leading to better generalization.
> It would be better to explain the multi-channel extension of permutations more carefully in ll.86-90. When I first read it, I mistakenly thought it was referring to the convolution channel.
Thanks for your suggestions–we will update our presentation of the NF-layers to more clearly explain that they can have arbitrary input and output channels, and avoid any confusion with channels in, e.g., the convolutional weight space being processed.
> Possible error in figure 2?
Thanks for pointing out this error, we will update the figure to fix this.
> In ll.155-160, I could not understand the motivation to use position embedding with NFN_{NP}. In which experiments with NFN_{NP}, position embeddings are used?
We use positional embeddings (PE) in all of our experiments. The reason is that our tasks (and most real world tasks involving weights) do not allow arbitrary permutations of the input and output layers of the networks being processed--only the hidden layers can be permuted. So the NP assumptions give an incorrect symmetry, which can be broken by PE.
By analogy, Transformers are equivariant to token permutations, but most sequence-to-sequence tasks are not actually permutation symmetric (order matters). So Transformers often use PE to break that symmetry.
We will update L155-160 to clarify this point.
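The symmetry-breaking role of PE can be illustrated with standard Transformer-style sinusoidal encodings. This is a hedged sketch: the shapes and the exact construction are illustrative assumptions, not necessarily the paper's recipe.

```python
import numpy as np

# Hypothetical fixed sinusoidal encoding per input/output position.
def sinusoidal_pe(n_pos, c):
    pos = np.arange(n_pos)[:, None]
    i = np.arange(c)[None, :]
    angles = pos / (10000.0 ** (2 * (i // 2) / c))
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

feats = np.random.default_rng(2).normal(size=(4, 8))  # 4 positions, 8 channels
pe = sinusoidal_pe(4, 8)
encoded = feats + pe

# Each position now carries a distinct encoding, so permuting positions
# no longer yields the same representation -- the excess NP symmetry is broken.
perm = np.array([1, 0, 3, 2])
assert not np.allclose(encoded[perm], feats[perm] + pe)
```

In other words, the model can distinguish input/output neuron positions even though the NP layer itself treats them symmetrically.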
## References
[1] Cohen and Welling. Group Equivariant Convolutional Networks.
---
Rebuttal Comment 1.1:
Title: Thank you for your replies
Comment: Thank you for your replies. I understand the relationship between [51] and this paper. I would like to keep my score.
---
Reply to Comment 1.1.1:
Comment: Acknowledged, thank you for reading our response and for your prompt reply! | Summary: This paper studies the problem of defining linear layers (and by extension, neural networks) that operate on neural network weight spaces. The core idea of this work is to take into account weight permutation symmetries, similar to Navon et al., ICML’23. In particular, the weights of certain feedforward architectures, such as MLPs and CNNs, can be permuted in several ways, without altering the function that the neural net represents. Therefore, the authors seek to define functions on the weights, that will be equivariant (or invariant) to these very symmetries.
To achieve this, they characterise the space of linear equivariant/invariant functions, following the framework of Ravanbakhsh et al., ICML’17, i.e. by identifying the parameters of these functions that should be shared. The authors propose two variants, one that is derived from the detailed characterisation of the weight permutation symmetries that arise from hidden neurons ($\text{NFN}_{\text{HNP}}$), and another one that assumes extra permutation symmetries in the input/output layer (that can be broken with positional encodings), and is, therefore, more parameter efficient.
Interestingly, these results easily carry over from MLPs to CNNs. The proposed layers are experimentally tested in a battery of tasks, such as predicting NN generalisation or sparsity masks for NN pruning, showing considerable improvement against baselines that do not take into account permutation symmetries, in addition to a heavily reduced parameter count.
Strengths: **Significance**: The proposed method has wide applicability, as correctly pointed out by the authors in the first paragraph of the introduction, and can potentially have a substantial impact on multiple neural-network-related problems (meta-learning, neural network editing, INR processing etc.).
**Presentation, clarity and reproducibility**: Although the concepts studied in this paper and the neural network architectures resulting from the study of the relevant symmetries, have a fair amount of complexity and may require a good understanding of the study of symmetries, the authors have made a very good effort to make their paper as accessible as possible. In particular, the illustrations, the nice summarisation of the core layer in Eq. (3) as well as the parameter counts in Table (1) and the accompanying explanations provide a good overview of the method that I believe will be appreciated by future readers. Moreover, the authors have formulated their layers in such a way that makes them easy to implement, which is also apparent from the pseudocode provided in the appendix.
**Experimental evidence**. The authors have tested their layers against a diverse set of problems and the reported results clearly motivate the need for equivariant/invariant layers and support all the design choices made.
**Extended scope**: A few months ago, another work by Navon et al., ICML’23 characterised the same family of equivariant/invariant layers as well (providing additional theoretical support). This is obviously not a problem, since the two works were developed in parallel. It is therefore nice to see that the present paper has a bonus, i.e. an extended scope compared to the aforementioned one since (1) the formulation seamlessly allows to extend the method to CNNs, and (2) the NP formulation allows for more practical implementation, a significant reduction in the number of parameters, and empirically strong performance. (3) This is probably subjective and a matter of taste, but I appreciated the alternative way to derive the layers (see sec B.3. in the appendix), using the parameter-sharing framework of Ravanbakhsh et al., ICML’17. I found it easier to follow and slightly more intuitive, which might be of independent technical interest in the future. Therefore, although the layers have been rediscovered, I feel that the contributions are still valuable.
Weaknesses: **Clarity of the proofs (appendix)**: I found the explanations in section B.4. a bit hard to follow. Personally, I am familiar with the work of Navon et al., so I could easily understand the concepts, but I fear a reader not versed in the topic, might get lost in this part. Since it is crucial in order to gain a deep understanding of the paper, I would recommend that the authors try to simplify some concepts (e.g. by giving more details for a particular sub-case).
**Related work**: I believe that the authors do not give enough credit to the work by Navon et al.. As I mentioned before, the two works are concurrent, and they apparently independently discovered the same core idea, but I think that the authors should be more upfront about this and mention this early on in the paper and cite Navon et al. more prominently.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: **Experiments, baselines and implementation details**: Most of the following questions ask for clarifications and I do not consider them as weaknesses. However, I think a discussion is needed and it would be useful to include the answers to the below in the paper. In detail:
- **NP case**. What are the positional encodings that are used in the experimental section? If I understand correctly they are fixed (handcrafted). Have the authors tried learnable ones instead? What is the number of added parameters when using learnable positional encodings? How do the authors explain the fact that using positional encodings in NFN_NP performs better than NFN_HNP?
- **NFN_PT case**. Since this method performs well in the pruning mask prediction experiment, why didn’t the authors also include its performance in the rest of the experiments (e.g. generalisation and INR classification)?
- **Pruning**: Why did the authors use a generative model (CVAE) for the winning ticket experiment? Why not a deterministic predictor (a 0-1 classifier)? Have the authors tested this? How far are the generated sparsity masks from the ones that IMP yields? Why isn’t the performance of MLP and MLP_aug included in this table?
- In section 3.2, would it be possible for completeness to add some baselines that classify images and 3D shapes, using different representations instead of INRs, e.g. state-of-the-art results using image-based (CNNs) or point cloud-based classifiers? Could the authors comment on why the performance is still far from the one that the community has obtained by applying these baselines? This can be clearly observed in CIFAR-10. In addition, apart from the inr2vec baseline, it might be helpful to add the baseline from Dupont et al., ICML’22 (the “functa” representation), where INRs are represented as vectors of latent modulations.
- **NN editing**. In section 3.4., regarding the CIFAR dataset, the differences between the competing methods are not clear qualitatively. Could the authors comment on this?
- Is there any theoretically-backed advantage of using a combination of equivariant layers followed by an invariant one instead of using only invariant ones (when the ground-truth function is invariant)?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: - The authors have dedicated a paragraph to the limitations of their method. One thing that might be missing is a clarification (if I understand correctly) that NF layers can only be applied to the weights of neural nets that share the same architecture (fixed number of layers, fixed kernel size for CNNs). Note that L189-190 (“…which contains thousands of CNN weights trained on several datasets with varied hyperparameters”) seems contradictory to this. I think the phrase "varied hyperparameters" should be clarified to avoid confusion.
- Moreover, given that classifying INRs with NF-layers seems to still have room for improvement in order to reach the performance achieved by traditional neural nets that operate on raw representations, it might be useful to discuss the possible reasons for this more thoroughly (perhaps stronger inductive biases, taking into account other properties of the geometry of weight spaces, might be missing?)
- No foreseeable negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the detailed and insightful review, and agree that independently developed frameworks can be useful contributions to the community.
> I found the explanations in section B.4. a bit hard to follow.
We will aim to improve the exposition in Section B.4. We also welcome any feedback on particular aspects that are confusing or could be improved.
> I think that the authors should be more upfront about [Navon et al] and mention this early on [...]
We will update the introduction to more clearly and prominently discuss DWSNets, and our contributions relative to that work. See our top level reply for more detail.
> Have the authors tried learnable [positional encodings] instead? What is the number of added parameters when using learnable positional encodings?
We will update the appendix to be clearer about this: the positional encodings (PE) are sinusoidal--learnable ones are also possible, though in Transformers it was observed that learned vs sinusoidal PE had essentially no impact on performance [1, Table 3 Row E].
For learned PE you would need a separate vector of length $c$ (feature channels) per input/output position, which would add $c(n_0 + n_L)$ parameters.
> How do the authors explain the fact that using positional encodings in NFN_NP performs better than NFN_HNP?
Note the effect is not consistent--the HNP variant performs better on some tasks, such as predicting generalization (Table 2). NFN_NP may perform better in some settings simply because it is more parameter-efficient and easier to train, or due to other optimization-related details. Also, the PE ablation in our top-level reply suggests that differences in performance are largely due to differences in the HNP and NP architectures themselves, rather than PE.
> NFN_PT case. Since this method performs well in the pruning mask prediction experiment, why didn’t the authors also include its performance in the rest of the experiments
This is a good question. We generally didn’t include the pointwise (PT) ablation because, in our early development, PT architectures performed very poorly on all problems _except_ for pruning. Since PT is much more restricted than the full (H)NP layers, poor performance on most settings was expected and we focused our attention on ablations in the setting where it did have signs of life (pruning).
> Pruning: Why did the authors use a generative model (CVAE) for the winning ticket experiment? Why not a deterministic predictor (a 0-1 classifier)? Have the authors tested this? How far are the generated sparsity masks from the ones that IMP yields? Why isn’t the performance of MLP and MLP_aug included in this table?
Even conditioned on a fixed initialization, winning tickets are _not_ deterministic due to the noise of SGD–one can quickly verify this experimentally. Therefore, it makes sense to learn a probabilistic model p(mask | init) rather than a deterministic mapping.
> In section 3.2, would it be possible for completeness to add some baselines that classify images and 3D shapes, using different representations instead of INRs, e.g. state-of-the-art results using image-based (CNNs) or point cloud-based classifiers?
We will add these (non weight-space) baselines in Section 3.2.
> Could the authors comment on why the performance is still far from the one that the community has obtained by applying these baselines? [...] it might be useful to discuss the possible reasons for this more thoroughly (perhaps stronger inductive biases, taking into account other properties of the geometry of weight spaces, might be missing?)
As suggested, we believe that this is primarily due to a difference in inductive biases. For example, it is common to use convolutional networks because of translation symmetry when solving CIFAR in image space. While NFNs are able to leverage weight space symmetries, they do _not_ have symmetry to translations of the CIFAR INRs. How to encode inductive biases related to the underlying natural signal, not just the weights, remains an interesting and challenging direction for future work. We will add this commentary to the paper.
> In addition, apart from the inr2vec baseline, it might be helpful to add the baseline from Dupont et al., ICML’22 (the “functa” representation), where INRs are represented as vectors of latent modulations.
Functa cannot be run on our INR datasets since it requires a special (meta-learning) training process to produce the INRs, while the INRs in our datasets are produced by vanilla training methods. We would also interpret functa as operating in a different setting depending on how much control one has over the process that produces the input weights.
> NN editing. In section 3.4., regarding the CIFAR dataset, the differences between the competing methods are not clear qualitatively. Could the authors comment on this?
For the CIFAR contrast task, it turns out that even the ground truth brightening operation can be relatively subtle to notice visually. Coupled with the fact that none of the methods are perfect at achieving the ground truth operation, this makes the qualitative differences difficult to observe (though some samples are more obvious than others, see the right column of Figure 4). We will investigate whether there is a better way to display the visual changes.
> NF layers can only be applied to the weights of neural nets that share the same architecture [...] L189-190 (“…which contains thousands of CNN weights trained on several datasets with varied hyperparameters”) seems contradictory to this.
The CNNs in that CNN Zoo share the same architecture but vary in learning rate and other optimization hyperparameters. We will clarify this wording to avoid confusion.
## References
[1] Vaswani et al. Attention is all you need.
---
Rebuttal Comment 1.1:
Title: Response to Authors
Comment: Dear authors,
Thank you for your reply. Most of my concerns have been addressed. I would advise incorporating the explanations you give here in an updated version (e.g. prominently discuss and compare against Navon et al., details about positional encodings including the ablation study, adding non-weight space baselines in 3.2. and discussing the differences). I continue to support the acceptance of the paper and my initial score remains.
*Minor*: The only thing I am sceptical about is the absence of comparison to Dupont et al., ICML'22. The authors correctly mention that the INRs in this paper are produced by specialised training, but since with this method one obtains a vector representation of each INR, then this representation can be fed to, e.g., an MLP to solve all the *invariant* tasks tested in this paper. This is not a major concern, but it would be nice to include an experimental comparison (missing from Navon et al. as well). I would also encourage the authors to discuss my last question in an updated version (*“Is there any theoretically-backed advantage of using a combination of equivariant layers followed by an invariant one instead of using only invariant ones (when the ground-truth function is invariant)?”*) – this was not answered in the rebuttal. | Summary: The authors proposed an architecture for processing other networks’ weights and implicit neural representations (INRs). The generalization abilities of this architecture are enhanced and the number of parameters is reduced by leveraging the symmetries of deep networks.
The authors claim the following contributions:
- Proposing a new architecture named Neural Functional Networks (NFN) for processing weights of other networks, including INRs. A key building block of NFNs is the NF-Layer, which the authors constrain to be permutation equivariant.
- Extensive experimental part that demonstrates the superiority of NFNs over various baselines.
Strengths: - The paper is dealing with novel and interesting problems of processing networks’ weights and INRs, I believe that this intriguing domain can open the path to more interesting and valuable works for our community.
- The paper is well-written and easy to follow. Specifically, the visualizations make the reading more accessible and straightforward.
- The experimental part is extensive and includes multiple learning setups and datasets.
Weaknesses: - My main concern is about the novelty of this work w.r.t [1]. In [1] the authors also proposed an architecture, coined DWSNets, for processing weights and INRs. They also provide some theoretical guarantees about the expressive power of their method. It is worth mentioning that [1] is **not a concurrent work** according to the NeurIPS guidelines. Therefore, the authors should explain what is the novelty of the current work over [1].
- Given the previous point, the authors should also include [1] as a baseline in the experimental part and cite it more explicitly in the paper.
- There is a lack of available technical information regarding the process of generating INRs. Specifically, in sections 3.2, 3.3, and 3.4 the authors did not mention the number of samples used for train/val/test. I also wonder how long it took to generate the INRs and how many resources were used. The authors should also include information about the training paradigm of INRs (for example the number of optimization steps) to enable easier reproducibility.
- Although performed in the weight space it will be interesting to see how well NFNs deal with more challenging style editing tasks like inpainting, deblurring, etc. Dilation and contrast editing are straightforward tasks in CV.
- Given that [2] was the pioneering study demonstrating the permutation equivariance of feed-forward networks, the authors should cite it (especially in lines 33-36).
- In lines 155-158 the authors stated they have used positional encoding (PE) to boost NFNs performance (although it breaks the symmetry), what is the performance gain presented by using PE? I did not see such an ablation (although I might have missed it).
----------
Citations:
[1] Equivariant Architectures for Learning in Deep Weight Spaces, Navon et al.
[2] On the algebraic structure of feedforward network weight spaces, Hecht-Nielsen.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - Can NFNs handle heterogeneous networks in a single dataset? For example networks with varying input dimensions and hidden features.
- What is the computational complexity of NFN / NFLayers?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: A limitation section is included.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your suggestions and questions--we aim to clarify our contributions and strengthen the experiments with the suggested baselines.
> explain what is the novelty of the current work over [DWS].
We agree that DWSNets are a very relevant recent work with significant overlaps and notable differences. We will update the introduction to more clearly and prominently discuss DWSNets, and our contributions relative to that work--see the top-level reply for details.
We would like to emphasize some contributions of our work relative to DWS:
1. We additionally focus on the NP setting, which leads to layers that are more scalable and much more parameter-efficient (see Table 1) than HNP/DWS layers while maintaining good performance in most of our experiments. NP layers are also easier to visualize, understand, and implement, as evidenced by Figure 2 and the pseudocode in Appendix A.
1. We extend both HNP/DWS and NP layers to handle CNN weights, in addition to MLPs. This extension is not trivial as CNN filters have additional dimensions that are not permutable, so the symmetries of CNN weights and MLP weights are not the same.
With regards to technical foundations, as mentioned by Reviewer jZvx, independently developed approaches for deriving equivariant weight-processing layers can be a useful contribution to the community. DWSNets and our paper also contain many complementary experiments and benchmarks to demonstrate the applicability of these architectures.
> Given the previous point, the authors should also include [DWS] as a baseline in the experimental part
We have run DWSNets on our 2D-INR classification datasets and present the results in our top-level reply. DWSNets perform somewhat worse than both NFN variants across the board, even when controlling for the number of parameters and after sweeping learning rates. We will also include results for this baseline on the other applicable benchmarks in our revised paper.
> The authors should also include information about the training paradigm of INRs (for example the number of optimization steps) to enable easier reproducibility.
Appendix D.3 contains additional information about the INR datasets, including the train/val/test sizes and the optimization process for generating each INR. We will add more explicit references to this information in Sections 3.2-3.4 of the main paper. Producing each INR takes ~168 seconds with 2 CPU cores (no GPU required), and can be done in parallel over multiple machines/cores. For the easiest reproducibility, we will also publicly release the INR datasets after the anonymous period.
> Although performed in the weight space it will be interesting to see how well NFNs deal with more challenging style editing tasks like inpainting, deblurring, etc.
As you note, although the style editing tasks are easy in image space the primary challenge is to be able to accomplish them in weight space, which is much more difficult. We agree that more challenging image editing tasks like inpainting and deblurring are a worthwhile goal, but ultimately become more of a test of geometric inductive biases (which convolutional networks have) rather than inductive biases related to weight space symmetries, which is the focus of this work.
> It is advisable for the authors to include a citation to [2] (especially in lines 33-36).
Our paper already cites [2] on Line 31, but we will also add a citation in lines 33-36.
> used positional encoding (PE) to boost NFNs performance (although it breaks the symmetry)
The input/output positional encoding (PE) for the NP variant of NFNs _only breaks symmetry to permutations of the input and output neurons of the weight space_, but preserves symmetry to hidden layer permutations. We apologize for the confusing wording and will update the writing.
> What is the performance gain presented by using PE?
We performed an ablation to answer this question--see the top-level reply for details. Results show that PE actually adds a very small (sometimes negligible) boost to NFN_NP performance. This suggests that the NFN_NP architecture by itself can solve many weight space tasks without needing to break permutation symmetry of the input and output neurons.
> Can NFNs handle heterogeneous networks in a single dataset?
In principle, parameter sharing makes our NP-equivariant NF-layer agnostic to the widths of the input weights, i.e. the number of neurons at each layer can vary. Similarly, the HNP-equivariant NF-layer is agnostic to the number of neurons at the hidden layers, but the input and output dimensions must stay fixed. The depth (number of layers) of the input weights must be fixed in both cases.
In practice, handling heterogeneous networks is challenging in modern ML frameworks since varying input sizes make batch computation difficult on GPUs, so in our experiments the input networks have a fixed size. In principle, padding could mitigate this problem, though it may use GPU memory inefficiently.
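The padding workaround mentioned above can be sketched in a few lines; this is an illustrative sketch only (not the paper's implementation), and `pad_to` is a hypothetical helper:

```python
import numpy as np

# Hypothetical zero-padding of variable-width weight matrices to a common
# size so they can be batched on GPU; a boolean mask records which entries
# are real weights versus padding.
def pad_to(W, n_max):
    padded = np.zeros((n_max, n_max))
    mask = np.zeros((n_max, n_max), dtype=bool)
    r, c = W.shape
    padded[:r, :c] = W
    mask[:r, :c] = True
    return padded, mask

W_small = np.ones((2, 3))       # a 2x3 weight matrix from a narrower network
padded, mask = pad_to(W_small, 4)
assert padded.shape == (4, 4) and mask.sum() == 6
```

The mask would let downstream pooling operations ignore padded entries, at the cost of the memory inefficiency noted above.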
> What is the computational complexity of NFN / NFLayers?
Suppose the input weights have constant width $n=n_0=\cdots=n_L$, so that there are $Ln^2$ input weights in total. A naive linear layer operating on these weights would require $L^2n^4$ operations.
For the NP case, consider implementing Eq 3 without parallelization, by:
1. First calculate all quantities of the form $W\_{\star,\star}^{(i)}$, $W\_{j,\star}^{(i)}$, $W\_{\star,k}^{(i)}$. These require $O(Ln^2)$ operations.
1. Calculate each term in Eq (3) separately, for each output index $(i,j,k)$. The first term requires $L^2$ operations, terms 2-5 require $Ln$ operations, and the final term requires $Ln^2$ operations.
1. Add the terms to produce the final result for each $(i,j,k)$. This requires $O(Ln^2)$ operations.
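Step 1 above can be sketched in a few lines of numpy; the shapes and variable names are illustrative, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
L, n = 3, 4  # depth and constant width of the input network (illustrative)
weights = [rng.standard_normal((n, n)) for _ in range(L)]

# Step 1: precompute the pooled quantities once, in O(L n^2) total.
full_sums = [W.sum() for W in weights]        # W_{*,*}^{(i)}, scalars
row_sums = [W.sum(axis=1) for W in weights]   # W_{j,*}^{(i)}, shape (n,)
col_sums = [W.sum(axis=0) for W in weights]   # W_{*,k}^{(i)}, shape (n,)

# Sanity check: the pooled quantities are mutually consistent.
assert np.isclose(full_sums[0], row_sums[0].sum())
assert np.isclose(full_sums[0], col_sums[0].sum())
```

Steps 2 and 3 would then combine these cached sums per output index, which is where the $O(L^2 + Ln^2)$ total comes from.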
So the layer can be implemented in $O(L^2 + Ln^2)$ operations. We will include this and the HNP result in the revised paper. | Rebuttal 1:
Rebuttal: We thank the reviewers for their detailed and thoughtful comments and questions. Reviewer suggestions have helped us improve the writing and pointed us towards additional experiments that significantly strengthen the paper. A brief summary of changes and new experiments:
* We will update the introduction to more clearly and prominently discuss DWSNets, and our contributions relative to that work.
* We have run DWSNets on our benchmarks in order to provide a direct comparison.
* We performed ablation experiments on the effect of positional encoding (PE) on the NFN_NP architecture variant.
Additional changes and information are provided in the reviewer-specific replies.
## DWSNet discussion
In order to more clearly discuss DWSNets as a relevant work with significant overlap and notable differences, we will add the following discussion to the introduction:
> The recent work of Navon et al. [45] recognized the potential of leveraging weight space symmetries to build equivariant architectures on deep weight spaces; they characterize a weight-space layer which is mathematically equivalent to the equivariant NF-Layer we develop for the HNP setting. Their work additionally studies interesting universality properties of the resulting equivariant architectures, and demonstrates strong empirical results for a suite of tasks that require processing the weights of MLPs.
> Our independently developed framework additionally focuses on the NP setting, where making stronger symmetry assumptions enables us to develop equivariant layers with improved parameter efficiency and practical scalability. This work also extends both NFN variants to process convolutional neural networks (CNNs) as input, leading to applications such as predicting the generalization of CNN classifiers (Section 3.1).
## DWSNet Comparison
Here is an initial empirical evaluation of DWS on our INR classification benchmarks. We trained DWSNets both at width 32 (as in [1]) and width 512 (as in ours), with our data augmentation of creating multiple INR copies per image. We also tried [1]'s data augmentation scheme, but found that it tended to hurt performance. For each dataset and channel size we swept learning rates in $\{10^{-3}, 5\times 10^{-3}, 10^{-4}, 5\times 10^{-4}\}$. Initial results show somewhat lower test accuracies than NFNs, with comparable performance on CIFAR10. Although the DWS layers match our HNP variant in theory, real-world performance depends heavily on many architectural hyperparameters, just as different CNN architectures can achieve different performance.
| Test accuracy | MNIST | FashionMNIST | CIFAR |
|----------------|-------|--------------|-------|
| DWS (32 channel)| 74.7 | 67.5 | 42.3 |
| DWS (512 channel)| 61.6 | 62.0 | 42.9 |
| NFN_NP (512 channel)| 92.9 | 75.6 | 46.6 |
| NFN_HNP (512 channel)| 92.5 | 72.7 | 44.1 |
And here is the number of parameters for each method, for reference:
| No. params | MNIST | FashionMNIST | CIFAR |
|----------------|-------|--------------|-------|
| DWS (32 channel)| 0.6M | 0.6M | 1M |
| DWS (512 channel)| 71M | 71M | 134M |
| NFN_NP (512 channel)| 45M | 45M | 47M |
| NFN_HNP (512 channel)| 69M | 69M | 135M |
## Positional Encoding (PE) ablation
We performed the PE ablation on both 2D INR classification and style editing tasks. The results show that PE actually adds a very small (sometimes negligible) boost to NFN_NP performance, though it never hurts. Since NFN_NP often performs as well as or better than NFN_HNP, this indicates that even the base NP variant can solve many weight space tasks without needing to break that symmetry.
**Table 3b**: Ablating the PE on the INR classification. Higher is better.
| | NFN_NP | NFN_NP (no PE) |
|-------------|----------------|----------------|
| CIFAR-10 | 46.6 ± 0.072 | 46.5 ± 0.160 |
| MNIST | 92.9 ± 0.218 | 92.9 ± 0.077 |
| FashionMNIST| 75.6 ± 1.07 | 73.4 ± 0.701 |
**Table 6b**: Ablating the PE on style editing tasks. Lower is better.
| | NFN_NP | NFN_NP (no PE) |
|----------------|--------------|----------------|
| Contrast (CIFAR-10) | 0.020 ± 0.000 | 0.020 ± 0.000 |
| Dilate (MNIST) | 0.068 ± 0.000 | 0.070 ± 0.001 |
## References
1. Navon et al. Equivariant Architectures for Learning in Deep Weight Spaces. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper considers the design of architectures whose inputs are the parameters of neural networks. They propose an equivariant weight-sharing scheme based on the permutational symmetries of neural networks: one can permute at least the internal neurons (the “HNP” case), and sometimes the input/output neurons as well (the “NP” case), of a network without changing the function represented by the end-to-end network. The parameter-sharing of the “NP” case is more advantageous but less often applicable, since the underlying problem must have additional symmetry structure to warrant input/output neuron permutational symmetries. To remedy this, the authors propose coupling the stronger “NP” case with positional encodings. They extend their framework to take as input convolutional layers, and test their proposed architecture on tasks including image classification from implicit neural representations, predicting sparsity masks, and weight-space editing. They compare to the use of MLPs with and without permutational augmentations, as well as problem-specific baselines, and obtain promising experimental results.
Strengths: Originality: Outside of concurrent work, the idea of using permutational symmetries to train directly on weight and bias inputs is original and makes sense. This permutation subgroup is distinct from other groups that have been studied in the past (such as $S_n$ and $S_n \times S_n$). Even beyond concurrent work, the idea of using positional encodings in conjunction with input/output permutational symmetries is a creative contribution.
Quality: The experiments are varied and test against a reasonable set of baselines.
Clarity: The paper is generally quite clearly written and explained.
Significance: This paper enables the application of weight-space symmetry techniques to CNNs. The problem of learning with neural network weight inputs is topical, and such methods are likely to enjoy practical usage.
Weaknesses: 1. The primary weakness of this paper is its novelty in relation to other work by Navon et al (ICML 2023), which also articulates an architecture equivariant to the permutational symmetries of weight space and provides more thorough results on its universality. The authors state this work was concurrent (which I will take at face value). If its novelty is judged in relation to this work, then the main contributions are the NP setting with positional encodings, the extension to CNNs, and the distinct set of experiments, which are extensions of the key idea. This very relevant work is currently only mentioned near the end of the paper, but I would think it is important to have a more in-depth discussion of how the two fit together and the novelty of this work, and for this discussion to appear earlier in the paper as part of its framing.
2. The paper only considers the permutational symmetries of neural networks, which are indeed perhaps the most general symmetries if one does not specify a certain nonlinearity. However, as noted in Godfrey et al 2022, a given nonlinearity may enjoy additional symmetries — for instance, ReLU enjoys a symmetry to positive scaling. This is not discussed in the paper, but would have been a more substantial contribution relative to Navon et al (2023). For instance, one straightforward way of incorporating this scaling symmetry could be to pick a positive scale based on the norm of the parameters of the first layer, and then use this scale to normalize the rest of the input weights and biases.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. Is there some advantage to classifying implicit neural representations, instead of the pixels directly? It would be helpful to motivate these problems more in the main body of the text.
2. Can the authors comment on the expressivity of their proposed idea of using input encodings to adapt the NP setting to problems without input/output permutational symmetries? Is this universal, for example, or is there some loss of information from adding the positional encodings? (On a related note, how exactly are the positional encodings incorporated — are they literally added, or are they appended?)
3. Could you be more specific in designing networks for a particular nonlinearity with different symmetries (e.g. scaling for ReLU)? See e.g. “On the Symmetries of Deep Learning Models and their Internal Representations” (Godfrey et al 2022), as well as other references in Navon et al 2023.
4. How large are the models input to the neural functional network? Will all weight-space methods will fail for truly large weight space inputs, i.e. millions of parameters, even if you have very few learnable parameters thanks to symmetry?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: There is no potential negative societal impact. The authors are upfront that the HNP case is less scalable than the NP case. One limitation not discussed is that the architecture described only takes into account permutational symmetries, and not other nonlinearity-dependent symmetries.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful review, and for highlighting aspects of our contribution such as the NP setting and application to CNNs.
> it is important to have a more in-depth discussion of how [DWS (Navon et al) and NFN] fit together and the novelty of this work, and for this discussion to appear earlier in the paper as part of its framing.
We will update the introduction to more clearly and prominently discuss DWS, and our contributions relative to that work–see the top-level reply for more details.
> Is there some advantage to classifying implicit neural representations, instead of the pixels directly? It would be helpful to motivate these problems more in the main body of the text.
The results in Section 3.2 represent first steps towards better methods that operate directly on INRs, which we believe will eventually have multiple advantages over operating on discrete representations of data (pixels, point clouds, voxel grids, etc…):
1. INRs are continuous and decouple the memory cost of the representation from the actual spatial resolution. This is important as we eventually move towards more complex and high resolution 3D signals such as entire 3D scenes [1], where for example working with gridded representations becomes less tractable. We believe the 3D shape SDF experiments are a promising first result for that direction (Table 4).
1. Working with INRs directly opens the possibility for a single method that can elegantly classify different types of signals of different sizes and resolutions, or even different dimensions. Whereas, for example, changing the image resolution or size in pixel space can pose a problem for CNNs.
We will add this motivation to Section 3.2.
> Can the authors comment on the expressivity of their proposed idea of using input encodings to adapt the NP setting to problems without input/output permutational symmetries? Is this universal, for example, or is there some loss of information from adding the positional encodings?
This is an interesting question–we don’t develop any such theoretical results for the NP architectures with positional encodings in this work. One could likely develop a universality result through a process analogous to Thm 3 in [2], which shows that positional encodings remove the permutation equivariance constraint and allow Transformers to approximate any function under certain conditions.
> On a related note, how exactly are the positional encodings incorporated — are they literally added, or are they appended?)
In practice, we implement the positional encoding by concatenation (in the channel dimension), though adding the encoding should have a similar effect. For matrices other than the input/output, we simply append zeros to keep the channel dimension consistent.
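As an illustration of this concatenation scheme (shapes, names, and the specific encoding are hypothetical, not the paper's actual code):

```python
import numpy as np

n_in, n_out, c = 4, 4, 2  # neuron counts and feature-channel count (illustrative)
W_input = np.zeros((n_in, n_out, c))   # input-layer weight features
W_hidden = np.zeros((n_in, n_out, c))  # a hidden-layer weight matrix

# A toy positional encoding over the input layer's rows, concatenated in the
# channel dimension; hidden-layer matrices get zeros appended instead, so the
# channel dimension stays consistent across all layers.
pe = np.broadcast_to(np.arange(n_in)[:, None, None], (n_in, n_out, 1))
W_input_pe = np.concatenate([W_input, pe], axis=-1)
W_hidden_pe = np.concatenate([W_hidden, np.zeros((n_in, n_out, 1))], axis=-1)

assert W_input_pe.shape == W_hidden_pe.shape == (n_in, n_out, c + 1)
```

Concatenation (rather than addition) keeps the encoding in a dedicated channel, which matches the zero-padding trick described for the non-input/output matrices.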
> The paper only considers the permutational symmetries of neural networks [...] One straightforward way of incorporating [ReLU] scaling symmetry could be to pick a positive scale based on the norm of the parameters of the first layer, and then use this scale to normalize the rest of the input weights and biases.
It is true that NFNs only consider permutation symmetries, while (depending on the activation) scaling symmetries may also exist in the weight space. We will discuss this limitation in the paper.
Since NFNs are typically dealing with weights produced by an optimization process like SGD, some existing literature suggest that accounting for scale symmetry may be unnecessary in practice; to quote [3]
> SGD’s implicit regularization balances weight norms and, therefore, scale invariance does not seem to play an important role in understanding symmetries of solutions found by SGD.
This could explain why we see decent performance on some tasks already. The suggested idea seems very interesting, though we likely do not have time to try it out in the limited discussion timeframe.
> How large are the models input to the neural functional network? Will all weight-space methods fail for truly large weight space inputs, i.e. millions of parameters, even if you have very few learnable parameters thanks to symmetry?
The size of the inputs to the NFN depends on the task: for example in predicting winning tickets we have 3 layers and 128 neurons, while INR classification deals with 3-layer INRs having 32 hidden neurons each (see Appendix D for full details). It is possible that much larger weight spaces will be challenging for any NF-type architecture to learn in, though such weight spaces also pose practical problems due to increased memory and compute usage.
## References
[1] Mildenhall et al. Representing Scenes as Neural Radiance Fields for View Synthesis.
[2] Yun et al. Are Transformers universal approximators of sequence-to-sequence functions?
[3] Entezari et al. The Role of Permutation Invariance in Linear Mode Connectivity of Neural Networks.
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: Thank you to the authors for their response. They clarified the use of operating on INR representations, made a valid point about why scale symmetry may be superfluous, and generally answered my other questions. My only remaining concern pertains to the novelty of this work relative to Navon et al. I think the experimental comparison and added paragraph will help on this front, but because the contributions relative to Navon et al are still fairly minor, I retain my original rating of weak accept. | null | null | null | null | null | null
Are Diffusion Models Vision-And-Language Reasoners? | Accept (poster) | Summary: The paper introduces Diffusion-ITM, a new method that directly adapts diffusion-based models to image-text matching tasks without retraining. Additionally, the authors collected a new benchmark called Generative-Discriminative Evaluation Benchmark (GDBench), which includes seven complex vision-and-language tasks and bias evaluation. The results show that Diffusion-ITM performs competitively on multiple tasks and surpasses CLIP. This paper underscores the significance of jointly considering discriminative and generative models and provides a new benchmark for future work.
Strengths: * The paper presents a new method to adapt diffusion-based models for image-text matching tasks without retraining. This innovation has clear practical value. The finding of the relative difference between the with- and without-text conditions (Fig. 3) is pretty interesting.
* The authors thoroughly evaluate the proposed method on various vision-and-language tasks, offering insights into its performance and potential biases.
* The introduction of new benchmark for image generation models over the discriminative tasks provides a useful tool for the research community and enables comparative analysis.
* This paper is well written and organized with clear logic to read.
Weaknesses:
* The Stable Diffusion includes a pre-trained CLIP text encoder and has itself been pre-trained on numerous image-text pairs. Therefore, Stable Diffusion has sufficiently learned the joint distribution of images and texts. The new finding is interesting but not surprising.
* The performance of the proposed method on more challenging benchmarks, like Winoground, is relatively poor, suggesting room for potential improvement.
* The paper lacks theoretical analysis—since diffusion models are generative, applying them to a discriminative task would benefit from more theoretical insight.
* This paper omits a time-cost/efficiency analysis. Stable Diffusion inference is slow. According to Eq. 5, how is $t$ assigned in your method? How many steps are needed, and why?
* Minor issues, such as missing punctuation after equations, formatting errors, and a redundant space at the start of line 141, need to be addressed.
Question:
* Can the Stable Diffusion method be extended to more challenging image-text tasks like Visual Question Answering (VQA)?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see weakness
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Please see weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time and effort to provide their detailed feedback on our submission! We are happy to note all the positive comments from the reviewer including:
- “... finding of the relative difference between with and without text conditions (Fig. 3) is pretty interesting."
- "... provides a useful tool for the research community and enables comparative analysis."
- "well written and organized with clear logic to read"
Similarly, other reviewers noted that:
- “hard negative finetuning method is intuitive yet highly useful”
- “Extensive ablations and experiments on variants of HardNeg, comparison between CLIP, BLIP, and Diffusion ITM [...]”
- “paper studied bias [...] which should draw more attention to the community”
- “Tackling a challenging problem, i.e., efficient quantitative evaluation of image generation models, with a simple yet effective method [...]”
We believe the primary criticism can be attributed to our paper not clearly emphasizing a few key contributions. We hope to clarify these contributions and address the reviewer's concerns below.
> "[...] The new finding is interesting but not surprising."
We respectfully disagree with the reviewer that the result is unsurprising. In fact, **generative models typically have poor discriminative performance**; we point the reviewer to several prior works [4,5,6] that analyze this trade-off between generative and discriminative capabilities.
Our contribution has been to analyze the discriminative capability of a new class of generative models, i.e. denoising diffusion models. Little work on this topic existed prior to our submission. We admit that, given their powerful capabilities, one might anticipate that they would perform well on discriminative tasks. Nevertheless, we fail to see how an "interesting but unsurprising" result is considered a weakness, especially in light of our extensive experimental evaluation.
Going into the project we had two plausible hypotheses: 1) The generative objective might lead to a deeper understanding of the composition of visual scenes beyond the frozen text-encoder, which has been shown to mostly work as a BOW model. 2) The generative objective might focus on low-level details at the expense of leveraging the semantics from the text-encoder (i.e. unlearning semantics). **Both hypotheses needed to be empirically tested - hence our paper!**
> "The performance [...] on more challenging benchmarks like Winoground [...] suggests room for potential improvement."
We would like to emphasize that **progress on Winoground has been generally very slow** ([Why is Winoground Hard? Investigating Failures in Visuolinguistic Compositionality](https://aclanthology.org/2022.emnlp-main.143)) despite several papers and research groups having tried to tackle it, and similarly with benchmarks like ARO. We believe it is unfair to see this as a weakness of our specific paper and not the nature of well-designed hard tasks.
> "[...] since diffusion models are generative, applying them to a discriminative task would benefit from more theoretical insight."
We provide theoretical insight in the paragraph on Eq. 4 and 5. These are specific justifications for our method and not about the general concept of making a generative model into a discriminative one. We plan to expand the Related Work subsection on "Repurposing Text-Conditioned Diffusion Models", i.e. discussing how several families of generative models are adapted differently to a discriminative setting. We already have an older draft where we cite work such as [1,2,3]. Is this what you had in mind?
> This paper ignores time cost/efficiency analysis. The Stable Diffusion inference is slow. According to Eq. 5, how to assign the t in your method? How many steps are needed and why?
We provide time cost/efficiency analysis in the Appendix (see Fig. 9). Based on feedback from other reviewers as well, this will be **moved more prominently to the main paper**.
Both concurrent works study time cost/efficiency analysis thoroughly, especially [Li et al. 2023], and our paper focused on additional contributions.
Regarding choosing how many timesteps t to sample, we specify in Section 5 "Experiments and Results" under Hyperparameters, that we choose sample_size=250 for the main experiments and sample_size=10 for further ablations (sampled uniformly from [0,1000]).
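A minimal sketch of this sampling-and-scoring scheme, where `denoising_error` is a hypothetical stand-in for the model's per-timestep conditional denoising loss (the actual scoring follows Eq. 5 in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
sample_size = 250  # 250 for main experiments, 10 for ablations
# Timesteps drawn uniformly over the diffusion range (illustrative endpoints).
timesteps = rng.integers(0, 1000, size=sample_size)

# Hypothetical ITM scoring: average the denoising error over the sampled
# timesteps and negate, so that a better match yields a higher score.
def itm_score(denoising_error, timesteps):
    return -np.mean([denoising_error(t) for t in timesteps])

score = itm_score(lambda t: t * 1e-3, timesteps)  # toy stand-in error
assert timesteps.min() >= 0 and timesteps.max() < 1000
```

Averaging over many sampled timesteps reduces the variance of the Monte Carlo estimate, which is why the main experiments use a larger `sample_size` than the ablations.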
> Minor issues (punctuation, formatting) [...]
Thank you for pointing these out! We will fix these.
> Can the Stable Diffusion method be extended to [...] VQA?
There is no elegant way but we did consider it (see Limitations section).
Prior works have tried creating a "caption" by concatenating question+answer and then using old-school VQA evaluation (treating it as classification).
However, this is **impractical, and we discarded it for two reasons**:
First, our main goal was to evaluate SD off-the-shelf, or with minimal changes to its objective. If you treat SD as a backbone you can add all sorts of architectures on top. It is common to use the middle U-Net layer as the image-text representation, see concurrent work (Li et al., 2023).
Second, VQA comes with its own problems. While it is “the” standard VL task, we believe that more **recent, often ITM-based, diagnostic benchmarks are more suitable to target phenomena such as compositionality**.
We hope we have sufficiently answered all of your comments point by point, and are happy to engage further on more questions! Would you consider increasing your ratings given the clarifications?
(Due to character limit constraints we provide the 6 citations from the main text as a comment after the rebuttal deadline)
---
Rebuttal Comment 1.1:
Title: Added Citations
Comment: [1] Flow-GAN: Combining Maximum Likelihood and Adversarial Learning in Generative Models (Grover et al., AAAI 2018)
[2] Pixel Recurrent Neural Networks (Van Den Oord et al., ICML 2016)
[3] [Diffusion Models as Masked Autoencoders](https://arxiv.org/abs/2304.03283) (Wei et al., arXiv 2023)
[4] On Discriminative vs. Generative Classifiers: A Comparison of Logistic Regression and Naive Bayes (Ng and Jordan, NIPS 2001)
[5] On the Generative-Discriminative Tradeoff Approach: Interpretation, Asymptotic Efficiency and Classification Performance (Xue and Titterington, Computational Statistics & Data Analysis, 2010)
[6] Training Normalizing Flows with the Information Bottleneck for Competitive Generative Classification. (Ardizzone et al., NeurIPS 2020) | Summary: Recently diffusion-based text-to-image generation models have evolved rapidly, but it’s still challenging to evaluate them quantitatively in an efficient way.
This paper smartly converts the evaluation of Stable Diffusion based image generation into simpler image-text matching tasks (e.g. image-text retrieval), and proposes an image-text benchmark (GDBench) of 8 carefully-chosen tasks to measure a generative model's fine-grained properties (e.g. compositionality) and fairness. CLIP and Stable Diffusion models are evaluated on this new benchmark.
Overall, this paper bridges discriminative and generative model evaluations, which could be inspiring to the image generation community and improve the quantitative evaluations of generative models.
Strengths: Tackling a challenging problem, i.e., efficient quantitative evaluation of image generation models, with a simple yet effective method (predicting the noise of the diffusion for image-text alignment).
Proposing an image-text benchmark (GDBench), covering diverse dimensions for measuring image generation (e.g. semantics, compositionality, fairness), which is beneficial for the image generation community.
Well written and easy to read.
Weaknesses: The biggest concern on my side is the lack of enough evidence to show that such an ITM eval aligns well with SD model quality, which IMO can't be strongly supported by the evals comparing CLIP and SD models on retrieval numbers. If possible, I would suggest at least preparing a few SD models with different capabilities (e.g. trained with different numbers of steps) and showing they get different numbers on GDBench; bonus: give some qualitative examples for these models conditioned on the same text.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: As in “Weaknesses”: how to prove the proposed method can measure the capability of SD models?
To measure discriminative vision-language models, both classification and retrieval tasks are usually used. Besides “pets”, could we also add some other common 0-shot classification tasks as well? E.g., INET (most common) or ObjectNet (to measure robustness).
minor question/suggestion: this paper's citations seem to use a different format (e.g. "[name year]") than others (e.g. "[number]")? The latter is more convenient to read, especially in print.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 4 excellent
Limitations: This paper has addressed model bias in one of GDBench tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your kind and insightful comments! We are thrilled by your comment that the paper is "Well written and easy to read".
We are glad you recognized that we are **"tackling a challenging problem, i.e., efficient quantitative evaluation of image generation models"**, with "diverse dimensions [...] (e.g. semantics, compositionality, fairness)".
Similarly, other reviewers noted that:
- "[...] This innovation has practical value to apply. The finding of the relative difference between the with and without text conditions (Fig. 3) is pretty interesting."
- "new benchmark for image generation models over the discriminative tasks provides a useful tool for the research community and enables comparative analysis."
- “hard negative finetuning method in particular is intuitive yet highly useful”
- “Extensive ablations and experiments on variants of HardNeg, comparison between CLIP, BLIP, and Diffusion ITM, and the relationship between the number of timesteps and the performance”
- “Paper studied bias in state-of-the-models, which should draw more attention to the community”
We address concerns and suggestions below, highlighting the contributions of our paper and describing additional experiments we have conducted.
> "[...] prepare a few SD models with different capability (e.g. trained with different numbers of steps), and show they have different numbers on GDBench; bonus: give some qualitative examples for these models conditioned on the same text"
We liked your main suggestion and have tested one more model, specifically the recently introduced [Stable Diffusion XL](https://arxiv.org/pdf/2307.01952.pdf). SD-XL comes with new hyperparameters and pre-processing steps, and we are therefore still in the process of confirming our results and some interesting findings. This is why we decided not to present rushed preliminary numbers here in the PDF.
We also plan to include one more model for camera ready, i.e. SD-XL 0.9 or DeepFloyd (with a T5 text encoder), and as you suggested, are currently conducting more analysis similar to the DrawBench study already in the appendix. Note that these models were released after our main submission. Regarding “qualitative examples”, we provided additional analysis of HardNeg vs NoNeg setups in the new PDF (see global response).
> “add some other common 0-shot classification tasks as well? E.g., INET (most common) or ObjectNet (to measure robustness).”
Our goal and **contribution is primarily to study more complex vision-and-language reasoning**, which is why we did not include ImageNet with its simple single-object images. These tasks were also studied in detail in related work like [Li 2023](https://arxiv.org/abs/2303.16203), and to our knowledge most recent vision-and-language models also focus less on object recognition.
That being said, GDBench doesn’t have to be static and ObjectNet looks like a good fit to test deeper understanding! For camera ready, we might either replace Pets with it or add it.
Regarding citation format, thanks for the suggestion, we will revise the format in the final version.
Thank you again for your helpful review! We believe we have addressed all of your concerns point by point. Given this, would you consider increasing your rating of our paper?
---
Rebuttal Comment 1.1:
Comment: > Our goal and contribution is primarily to study more complex vision-and-language reasoning
Makes sense. There's one more dataset for VL reasoning which isn't mentioned in this paper: Visual Spatial Reasoning. Adding it here in case it's helpful.
Thank the authors for the detailed explanation. I would like to raise my rating to 6 and look forward to seeing more SD models in the camera-ready version.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response and trust in our paper!
We also appreciate the pointer to VSR; it is exactly the kind of task we were interested in. We are considering adding it, but the one issue for now is that the task is not directly cast as image-text matching. The authors mention that they handle this for zero-shot CLIP by negating the sentence, but we are not sure this is the right approach since it tests negation understanding at the same time. In any case, we will investigate this further and are grateful for the pointer!
Strengths: 1. The two technical contributions (unconditional normalization & tuning on MS COCO) are effective and well ablated.
2. The introduced benchmark could be useful for the community to evaluate the prompt-following ability and biases of new diffusion models.
3. The paper studied bias in state-of-the-models, which should draw more attention to the community.
Weaknesses: 1. The generative-for-discriminative methods typically take many feedforwards of the diffusion model to evaluate a single sample. Although the paper briefly mentions its slow runtime at the end of the Appendix, I think the computational cost is an important factor of the proposed algorithm and should be rigorously presented and studied in the main text, especially when the paper proposes a benchmark - A clear study on the computational cost would be important to others who plan to evaluate on this benchmark.
2. The GDBench covers several hard image-and-text matching datasets. However, I think results on other standard discriminative datasets would be helpful for the audience to understand the advantage of the proposed algorithm and compare to previous works. For example, how does HardNeg-DiffusionITM work on ImageNet compare to Diffusion Classifier [Li et al. 2023] / DiffusionITM (Ours) in Table 2(b)? I understand ImageNet is a 1000-way classification and would be a long time to run for this kind of methods (therefore studying runtime is important as mentioned above), but Diffusion Classifier [Li et al. 2023] has already reported their performance on ImageNet and an apple-to-apple comparison would be very helpful. Same applied to the CLEVR dataset. Another question is how tuning on MSCOCO works for image-text matching on MSCOCO?
3. I am not sure what the use case is for introducing MSCOCO tuning into Diffusion Classifier. It indeed improves image-text matching performance compared to the plain one, but all these methods are too slow (or not good enough) to be practically applied. On the other hand, tuning the weights of diffusion models prevents them from serving as a tool for evaluating diffusion models. Then in what case do we need MSCOCO tuning?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. I am not sure how much MSCOCO HardNeg is clearly better than MSCOCO NoNeg - To me, MSCOCO NoNeg seems to be even better than HardNeg quantitatively, or at least on par with it in Table 2. How did the authors land on HardNeg as the default setting?
2. Why is Winoground's random accuracy (25%) higher than the algorithms' accuracy in Table 1(a)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed comments and effort, showing that you engaged with our work.
We are glad you found our “two technical contributions (unconditional normalization & tuning on MS COCO) effective and well ablated" and that our study contributes to drawing attention to bias.
Similarly, other reviewers noted that:
- “Tackling a challenging problem, i.e., efficient quantitative evaluation of image generation, with a simple yet effective method [...]”
- "well written and organized with clear logic to read"
- “hard negative finetuning method in particular is intuitive yet highly useful”
- “Extensive ablations and experiments on variants of HardNeg, comparison between CLIP, BLIP, and Diffusion ITM …”
We believe the primary criticism can be attributed to our paper not clearly emphasizing a few key contributions. We hope to clarify these contributions and address the reviewer's concerns below.
> [...] I think the computational cost [...] should be rigorously presented and studied in the main text [...] - important to others who plan to evaluate on this benchmark.
The computational cost has been mentioned in the Appendix. Given that other reviewers were also looking for it, we will move the discussion of this important subject from the Appendix to the main paper and expand the analysis, as well as cite findings from other papers more prominently:
Both concurrent works study time cost/efficiency analysis thoroughly, especially [1], and our paper focused on additional contributions, benefiting from their time cost findings.
Specifically, we will a) conduct the same analysis as in Fig. 9 for more than just one dataset and b) incorporate the speed-up suggestions from R1 that we implemented over the last week (yielding improvements of 10-20%).
> [...] I think results on other standard discriminative datasets would be helpful for the audience to understand the advantage of the proposed algorithm and compare to previous works [...] Diffusion Classifier has already reported their performance on ImageNet and an apple-to-apple comparison would be very helpful.
Our **goal and contribution is primarily to study more complex vision-and-language reasoning** which is why we did not include ImageNet containing simple single-object images. These vision-focused tasks were already studied in detail in related work like [1] and to our knowledge most recent models of vision-and-language also focus less on object recognition.
Fair comparison to previous work is important, which is why we include CLEVR (a complex reasoning task) and put in effort to generate images+text as close to [2] as possible, since [2] did not publish their dataset. Can you clarify what you mean by "same applied to the CLEVR dataset"?
However since a) our focus is VL reasoning and b) our zero-shot text retrieval setting should be identical to DiffusionClassifier (Li et al., 2023) for the ImageNet case we do not see a strong need to add ImageNet. We do see value in ObjectNet as proposed by R4 and will consider it for camera ready!
> Another question is how tuning on MSCOCO works for image-text matching on MSCOCO?
If we understand the question correctly: We finetuned on MSCOCO train set so we can still test on the validation set. However the transfer setting can indeed not be studied. But our intuition is that Flickr30K covers a very similar task and hence it is not a problem for GDBench.
> I am not sure what is the use case to introduce MSCOCO tuning into Diffusion Classifier.
Regarding "all these methods are too slow (or not good enough) to be practically applied": a) it is possible diffusion or DiffusionITM becomes faster soon (i.e. one might get good performance with fewer samples), and b) it is **common practice to have a fast retriever narrow down the best results** (i.e. top-10) and then apply a slower, more sophisticated retriever for the final selection (see [Thinking Fast and Slow: Efficient Text-to-Visual Retrieval with Transformers](https://arxiv.org/abs/2103.16553)). Moreover, our work introduces a **generally new idea for incorporating negative pairs into generative pre-training**, which we believe is independent of applying it as ITM or evaluating off-the-shelf models.
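The two-step retrieval practice described above can be sketched as follows (hypothetical names; `fast_score` plays the role of a CLIP-style retriever and `slow_score` of a slower, more accurate reranker):

```python
def two_stage_retrieve(query, corpus, fast_score, slow_score, k=10):
    """Stage 1: a cheap scorer narrows the corpus to a top-k shortlist.
    Stage 2: a slower, more accurate scorer reranks only the shortlist."""
    shortlist = sorted(corpus, key=lambda doc: fast_score(query, doc), reverse=True)[:k]
    return max(shortlist, key=lambda doc: slow_score(query, doc))
```

The slow scorer is only invoked `k` times per query, so its per-pair cost matters far less than in full-corpus retrieval.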
> [...] how much MSCOCO HardNeg is clearly better than MSCOCO NoNeg
That is a valid point and we authors discussed this during the submission. HardNeg is better for all image retrieval tasks (except Winoground which is in both cases below random and a tiny dataset). On the other hand, for text retrieval, NoNeg is better except for Pets. We see promise with negatives, especially hard image negatives for image retrieval.
We investigated further regarding generative performance (specifically image-text-alignment) and conducted the same study as with DrawBench zeroshot vs. HardNeg in Appendix B, but this time on HardNeg vs. NoNeg. We found that **HardNeg has higher alignment, winning 50 of the comparisons while NoNeg only won 38** (see Rebuttal PDF). Hence, we chose to say that MSCOCO HardNeg is indeed better than MSCOCO NoNeg.
> Why is Winground's random accuracy (25%) higher than the algorithms' accuracy [...]?
This is very common behaviour: most models tested in the Winoground paper score below random! An intuition: many VL models behave similarly to a bag-of-words model, i.e. they ignore word order. So imagine a model that treats $c_0$ and $c_1$ (both containing the same words in a different order) as the same caption for many examples.
At the same time, it has a default prior for one of the images and prefers it with both captions.
Such behavior on many examples leads to a below-random score.
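A toy sketch of this failure mode (invented example; the word sets and prior stand in for a real model's scores):

```python
def bow_score(caption, image_words, image_prior):
    """Bag-of-words matcher: word overlap (order ignored) plus a per-image prior."""
    return len(set(caption.split()) & image_words) + image_prior

# Winoground-style pair: same words, different order, different meaning.
c0, c1 = "dog chases cat", "cat chases dog"
images = {
    "img0": ({"dog", "chases", "cat"}, 0.1),  # the model's "default" image
    "img1": ({"dog", "chases", "cat"}, 0.0),
}

# Both captions get identical word-overlap scores, so the prior decides:
# the model picks img0 for BOTH captions, the image score for this example
# is 0, and repeated over many examples the group score falls below random.
picks = {c: max(images, key=lambda k: bow_score(c, *images[k])) for c in (c0, c1)}
```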
We hope we have sufficiently answered all of your comments point by point, and are happy to engage further on more questions! Given this, would you consider increasing your rating of our paper?
[1]: Your Diffusion Model is Secretly a Zero-Shot Classifier (Li et al., Arxiv 2023)
[2]: Text-to-image diffusion models are zero-shot classifiers (Clark & Jaini, ICLR MRL Workshop 2023) | Summary: This paper studies the discriminative capabilities of diffusion models measured by image-text matching. A new matching score computation enables text-based image retrieval beyond simply text retrieval in existing works. A new benchmark, augmented from existing image-text benchmarks, is proposed for researchers to evaluate current diffusion models.
Strengths: 1. DiffusionITM enables image retrieval by making simple and minimal changes to existing text retrieval methods. The method generates good results on simpler datasets and performs better than random on challenging ones.
2. The proposed benchmark covers diverse aspects but is also optimized for lightweight testing. Differences in a few methods have been demonstrated with the benchmark.
Weaknesses: 1. Evaluation on Flickr with CLIP-retrieved negatives. This process might introduce bias towards or against CLIP-based image diffusion models. For example, in Table 1a, CLIPRN50x64 obtains an accuracy of 71.9. According to line 545, does this mean that in 28.1% of the cases CLIP does not select the correct positive? To fix this, maybe an evaluation against the original model-free Flickr retrieval metric is needed, e.g. measuring whether CLIPRN50x64 or a CLIP-based method has an advantage/disadvantage compared with other methods, measured on the new simplified set vs. the original set. Similarly, the CLIP-based HardNeg training might be exploiting this bias, besides learning useful discrimination skills.
2. It is preferred to compare different kinds of diffusion model variants in the proposed benchmark and show some insights there. For example, one might see an advantage of T5 based text encoder in grammar/text related metrics, or one could study pixel based diffusion vs. latent diffusion etc. This comparison is challenging in obtaining the models but can provide more value to the community.
3. The current method makes an assumption that DiffusionITM, based on eq. 2 and 5, is a great way of ITM and will be the case for a while. However, given the field advancing so rapidly, researchers might discover a better zero-shot matching method that could infer quickly and outperform CLIP consistently for example. Then, at that time, why would researchers not adopt the original retrieval tasks, e.g. on Flickr with all images, but adopt GDBench with 10 images?
4. It is claimed that the proposed tuning strategy keeps the generation capability. Examples are shown in Figure 5 and positive human evaluation is shown, but is it possible to evaluate existing standard metrics and see the difference? FID might not apply as the tuning happens on COCO. One might even see better generation quality with discriminative tuning applied.
5. How is GDBench different from an ensemble of image-text matching evaluation metrics, except that it is made light-weight, e.g. measuring 10 samples. It seems DiffusionITM is a general enough method that can be evaluated on theoretically any image-text matching tasks. From another perspective, is it possible to show correlation of the GDBench metrics with real compositional generation quality, e.g. measured by human evaluation.
6. Given the limited samples in some of the evaluation datasets, what are the standard deviation for different runs? From Figure 9, it looks noisy.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Table 1 typo: Difussion.
And see the weaknesses section for questions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed comments and effort, showing that you engaged with our work! We are happy to note some of your positive feedback such as our method working with “minimal changes” and how “light-weight GDBench covers diverse phenomena”.
Similarly other reviewers noted:
- “Tackling a challenging problem, i.e., efficient quantitative evaluation of image generation, with a simple yet effective method”
- “The finding of the relative difference between with and without text conditions is pretty interesting."
- "provides a useful tool for the research community and enables comparative analysis."
- "well written and organized with clear logic to read"
- “hard negative finetuning method in particular is intuitive yet highly useful”
- “Extensive ablations and experiments on variants of HardNeg, comparison between CLIP, BLIP, and Diffusion ITM, …”
We believe the primary criticism can be attributed to our paper not clearly emphasizing a few key contributions. We hope to clarify these contributions and address the reviewer's concerns below.
> “It is preferred to compare different kinds of diffusion model variants in the proposed benchmark and show some insights”
Based on your and R4's feedback, we are indeed **including more models and have experimented with the recently introduced Stable Diffusion XL** over the rebuttal period.
SD-XL comes with new hyperparameters & preprocessing and we are therefore still confirming our results and interesting findings. This is why we decided to not present rushed preliminary numbers here in the PDF but will do so in camera ready after further investigation.
We are not aware of any **published** open-source model using a T5 text encoder, but that would be an interesting addition. The only model we are aware of is the recently introduced DeepFloyd IF (without a paper).
At the time of writing the paper both models had not been released and most competitive open-source models are variations of SD.
> [...] Researchers might discover a better zero-shot matching method that could infer quickly and outperform CLIP consistently for example. [...] Why would researchers not adopt [...] Flickr with all images but adopt GDBench with 10 images?
While we do hope GDBench will be used by researchers, we can't predict the future for several years (realistically benchmarks fade, but GDBench might inspire the direction of future ones). For now we predict diffusion-based methods will stick around for a while and will not become as fast as CLIP, whose speed comes from encoding text & image separately. Any generative model turned discriminative by definition needs to encode both together (= slow). So such a model should be considered a second-step slow retriever applied after a fast, simpler retriever (CLIP) has narrowed down the selection (details in [Thinking Fast and Slow: Efficient Text-to-Visual Retrieval with Transformers](https://arxiv.org/abs/2103.16553)). Recent trends in VL evaluation point towards prioritizing a few high-quality examples over many easy ones. Therefore **future researchers will get the most value from evaluating a few harder examples**, while also saving time with a slower model.
> Evaluation on Flickr with CLIP retrieved negatives [...] might introduce bias towards or against CLIP based models
We acknowledge this is not properly explained in the current version of the paper. However, we think a better research question in this situation is: "If CLIP was a first-step fast retriever to narrow down the search, could SD as a slow retriever improve upon re-using CLIP for the second step as well?" This was in fact an implicit motivation of ours, but not clearly enough stated in the paper. We will add a paragraph to the paper with a disclaimer as well as an explanation of how these numbers are still insightful for the stated reason. While other experiments took priority over the last week, we plan to study another way to retrieve hard negatives (i.e. BERT embeddings) for camera ready.
For the finetuning we followed prior established methods and believe this has negligible bias.
> Is it possible to evaluate existing standard metrics [for generation] and see the difference?
Our main goal of this paper is to **study higher-level semantic skills of these models which are not adequately captured by older standard metrics**. We therefore focus on small-scale human evaluation to validate our claim of on-par or better generation (see DrawBench analysis in Appendix and Global Rebuttal here). FID score would be insightful but, as explained in the global rebuttal, we are focusing on other high-priority experiments in this short time frame.
> How is GDBench different from an ensemble of image-text matching evaluation metrics?
GDBench is not a fully new dataset but here are some contributions:
1. We did make an effort to a) generate CLEVR again (not public before), b) make Flickr30K feasible and c) assemble the bias dataset.
2. Easy setup with our repo
3. Selecting tasks based on coverage+diversity of skills as well as feasibility criteria and recent trends, so that other researchers will not have to re-think this (like GLUE in NLP).
But it is not set in stone and based on suggestions we plan to also include ObjectNet.
On top, we are including more models to show how GDBench performance correlates with image-text-alignment ratings.
> What are the standard deviations[...]? Fig. 9 looks noisy.
Fig. 9 was generated with fewer than 200 examples, which still illustrates the point, but we will fix the figure caption to mention this.
We did not have time during the rebuttal to compute std on all data, but studied Winoground (the smallest dataset) with a small sample size of 10 noise-timestep samples per datapoint, i.e. where we expect high std. Result: $31.5\% \pm 0.43$.
We hope we have sufficiently answered all of the reviewer’s comments point by point, and are happy to engage further on more questions! Considering this, we hope the reviewer increases their rating of our paper. | Rebuttal 1:
Rebuttal: Dear Reviewers and Area Chairs,
First we would like to thank all reviewers for writing detailed and thoughtful responses. It raised interesting discussions among the authors and will make it an overall stronger paper. In particular, we are grateful that Reviewer JbKR gave us a a score of 7 with high confidence and very detailed technical feedback that showed his expertise in this domain.
Reviewers pointed out the following strengths of our paper:
1. **Method contributions**: “hard negative finetuning method in particular is intuitive yet highly useful” (JbKR), “two technical contributions (unconditional normalization & tuning on MS COCO) are effective and well ablated” (T2Yx) and "finding of the relative difference between the with and without text conditions (Fig. 3) is pretty interesting" (ttjT)
2. **Analysis**: “evaluated extremely thoroughly in its effect on both discriminative and generative behavior" (JbKR) and “extensive ablations and experiments on variants of HardNeg, comparison between CLIP, BLIP, and Diffusion ITM, and the relationship between the number of timesteps and the performance” (JbKR)
3. **Evaluation contribution**: “Tackling a challenging problem, i.e., efficient quantitative evaluation of image generation models, with a simple yet effective method” (cFkA), “benchmark covers diverse aspects but is also optimized for lightweight testing” (jr2c), “paper studied bias [...], which should draw more attention to the community” (T2Yx)
4. **Presentation**: “Well written and easy to read” (cFkA) and "well written and organized with clear logic to read" (ttjT)
On top of responding to concerns, we also addressed common feedback from reviewers with further empirical investigations, summarized here:
1. Reviewer JbKR pointed out that an **additional stronger baseline** (CLIP ViT-H/14) is needed. The results are shown in the Rebuttal PDF (Tab. 1). The baseline is stronger overall and we will address this in the camera ready version.
2. Reviewer T2Yx asked about why we highlighted the **finetuning setup *HardNeg* over the *NoNeg* setup as our strongest model**. While performance on our discriminative benchmark is very close, we did conduct an additional human judgement study where we found that HardNeg has stronger image-text-alignment when it comes to **generative** performance (see details in rebuttal response to T2Yx and sample images in Rebuttal PDF)
3. Both jr2c and cFkA suggested to **evaluate more models**, for reasons such as a) insights on model variants (i.e. T5 text encoder) and b) evidence that GDBench correlates with generative performance. Most strong open-source models boil down to variants of Stable Diffusion, and we therefore spent most of this week evaluating the recently introduced Stable Diffusion XL. SD-XL comes with new hyperparameters and pre-processing steps, and we are therefore still confirming our results and some interesting findings. This is why we decided not to present rushed preliminary numbers here, but will do so in camera ready after further investigation. We are not aware of any open-source model (that also comes with a paper) using a T5 text encoder, but that would be an interesting addition. The only model we are aware of is the recently introduced [DeepFloyd IF](https://github.com/deep-floyd/IF) (without a paper). **At the time of writing the paper, neither model had been released.**
4. We provide standard deviation for jr2c.
Due to the short time frame we had to prioritize the most insightful experiments and responded to other comments in the individual rebuttal responses. The common threads here were the computational/speed cost of our method (JbKR, T2Yx, ttjT) and including more datasets (cFkA, T2Yx):
1. **Computational cost**: We agree with reviewers that computational cost is an important issue, and had previously discussed it in the Appendix. For the final paper we will move the discussion from the Appendix to the main paper, expand the analysis, and cite findings from other papers more prominently:
Both concurrent works analyze time cost/efficiency thoroughly, especially Li et al. [2023]; our paper focuses on additional contributions and benefits from their findings. Hence this is already covered in the previous literature.
We also emphasized that models such as DiffusionITM are not intended to be a fast retriever such as CLIP but more a “second-step slow retriever” that comes into play once a fast retriever has narrowed down the options to the hardest candidates. This is common practice (see [Miech et al., 2021](https://arxiv.org/abs/2103.16553)).
2. **More datasets**: Two reviewers suggested more vision tasks. Our **goal and contribution is primarily to study more complex vision-and-language reasoning**, which is why we did not include ImageNet, which contains simple single-object images. These vision-focused tasks were already studied in detail in related work like [Li 2023](https://arxiv.org/abs/2303.16203), and to our knowledge most recent vision-and-language models also focus less on object recognition. Fair comparison (raised by T2Yx) to previous work is important, which is why we include CLEVR (a complex reasoning task) and put in effort to generate images+text as close to [2] as possible ([2] did not publish their dataset). We do see value in ObjectNet as proposed by cFkA and will consider it for camera ready.
We plan on studying the following for a potential camera ready version:
- bias in our modified Flickr30K task (i.e. how to choose hard negatives for Flickr30K)
- include ObjectNet as an additional task
As much as we would like to run all possible experiments, it is unclear whether we will have enough time to fulfill all other suggestions. However we will mention these points in the limitations section!
Overall, we believe we addressed the main criticisms and engaged faithfully and constructively with the feedback. We are looking forward to fruitful exchanges the next weeks to further improve the work!
Pdf: /pdf/430f38cb2eee9bd9d92b358ff5f3a3c9264cc969.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes novel techniques for improving the performance of text-to-image diffusion models on zero-shot image-text matching tasks. They first propose subtracting the unconditional denoising error $\|\epsilon - \epsilon_\theta(x_t, t)\|_2^2$, which reduces the problem in image retrieval where one image is a priori far more likely under all captions. They also propose using a hard negative finetuning loss to increase the denoising error on hard negatives, which improves its discriminative (and generative) capabilities. Finally, they assemble GDBench, a benchmark of 7 existing datasets that they use to quantify different aspects of text-image alignment performance. They find that their proposed model significantly improves on prior work that uses text-to-image diffusion models (Diffusion Classifier), and is competitive with CLIP ResNet50x64. They also benchmark model bias and find that Stable Diffusion 2.1 is less biased than Stable Diffusion 1.5 and CLIP RN50x64.
Strengths: - The paper proposes 2 novel modifications that significantly improve performance over Diffusion Classifier (a prior work on using diffusion models for text-image matching tasks).
- The hard negative finetuning method in particular is intuitive yet highly useful. It is also evaluated extremely thoroughly in its effect on both discriminative and generative behavior.
- Authors also gather a set of 8 image-text-matching tasks into a proposed GDBench benchmark for easily measuring and comparing the performance of different ITM methods.
- Authors do a thorough analysis of the bias of Diffusion ITM, across versions of Stable Diffusion, as well as in comparison to a few CLIP models.
- Extensive ablations and experiments on variants of HardNeg, comparison between CLIP, BLIP, and Diffusion ITM, and the relationship between the number of timesteps and the performance of the method.
Weaknesses: - Computational: The proposed method runs in $O(n)$, where $n$ is the number of candidate images or text captions to choose from. This becomes highly impractical for real retrieval applications, compared to approaches like CLIP where fast approximate nearest neighbor solutions work extremely efficiently.
- Baselines: while the method does outperform previous work on using diffusion models (e.g., Diffusion Classifier), it doesn't compare against the strongest discriminative baselines, namely OpenCLIP ViT-H/14. This is fair since Stable Diffusion 2 actually uses this model as its text encoder. The paper currently only compares against CLIP ResNet50x64, which is quite weak in comparison.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: **Main questions/issues**:
- How well does OpenCLIP ViT-H/14 do on these ITM tasks? This is the most relevant discriminative baseline since SD 2 uses its text encoder.
- The Flickr30K benchmark is biased against the CLIP model used to produce it. The CLIP model is used to produce the top 10 candidates, and *the correct candidate is added if it's not among the top 10*. If the model being evaluated is CLIP, it has to choose between 9-10 hard negatives, whereas other models will receive a boost from a weak ensembling effect with CLIP. I'd suggest another way of creating the 10 candidates without correlation to any model output.
- Equation 6: is there a major typo here? This appears to still minimize $\mathcal L_{neg}$.
- L 246: Hard negative finetuning uses $\lambda=-1$? Why is it negative when it's only used for $clip(\mathcal L_{neg}, |\lambda \mathcal L_{pos}|)$? Does this mean that $\mathcal L_{neg}$ is clipped to the range $[-\mathcal L_{pos}, \mathcal L_{pos}]$? Something is wrong here.
**Minor questions/comments**:
- Table 1b: Where does the 37.5 score come from with Diffusion Classifier/DiffusionITM on Winoground text? The Diffusion Classifier paper reports 34.0.
- Page 6, footnote 2: This is not too important, since the prompt is fixed across methods, but the Pets prompt template used in the CLIP paper is "a photo of a {}, a type of pet." This probably increases scores across the board for all methods.
- L240-241: "we drop the complicated procedure that iteratively prunes classes after a number of noise-timestep samples" -- Diffusion Classifier doesn't use its adaptive strategy for its ITM experiments either, so its evaluation strategy matches DiffusionITM here.
- Appendix G (runtime) mentions that "batch size has diminishing returns in terms of runtime after batchsize=4." However, based on the Diffusion Classifier github, using FlashAttention + FP16 + `torch.inference_mode()` + batchsize=32 achieves about 80 evals/s on an A6000. I'd suggest double checking your implementation. Computation speed should be an issue overall with this method, but not to this level.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes, authors address the main limitations of their paper and method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
We are thankful for your detailed and extremely thorough review that shows you have engaged with the work and are very confident with the subject area!
Thank you as well for highlighting strengths of the paper such as:
- "The hard negative finetuning method in particular is intuitive yet highly useful. It is also evaluated extremely thoroughly in its effect on both discriminative and generative behavior."
- "do a thorough analysis of the bias of Diffusion ITM, across versions of Stable Diffusion, as well as in comparison to a few CLIP models."
- "Extensive ablations and experiments on variants of HardNeg, comparison between CLIP, BLIP, and Diffusion ITM, and the relationship between the number of timesteps and the performance"
Similarly other reviewers also noted:
- “Tackling a challenging problem, i.e., efficient quantitative evaluation of image generation models, with a simple yet effective method [...]”
- "[...] This innovation has practical value to apply. The finding of the relative difference between the with and without text conditions (Fig. 3) is pretty interesting."
- "new benchmark for image generation models over the discriminative tasks provides a useful tool for the research community and enables comparative analysis."
- "well written and organized with clear logic to read" (another reviewer: “well written and easy to read”)
We appreciate your very detailed comments with suggestions that even went into implementation details! We will address them below:
> "How well does OpenCLIP ViT-H/14 do on these ITM tasks?"
We fixed this valid concern by **running the ViT-L/14 baseline on all tasks** and will include it in Table 1 of the paper (and also the Rebuttal PDF here). ViT-H/14 seemed to be temporarily unavailable for download over the last few days, so we opted for the closest option based on the [OpenCLIP repo](https://github.com/mlfoundations/open_clip). The short answer is: It does better than our original baseline, as also shown in DiffusionClassifier, and we will discuss this in the camera ready paper.
> "The Flickr30K benchmark is biased against the CLIP model used to produce it"
Thank you for raising this concern! We acknowledge this is not properly explained in the current version of the paper and is therefore misleading. A **better research question in this situation might be**: "If CLIP was a first-step fast retriever to narrow down the search, could SD as a slow retriever improve upon re-using CLIP for the second step as well?" (see [Thinking Fast and Slow: Efficient Text-to-Visual Retrieval with Transformers](https://arxiv.org/abs/2103.16553) for in-depth discussion) This was an implicit motivation of ours but not clearly enough stated in the paper. We will add a paragraph to the paper with a **disclaimer as well as an explanation how these numbers are still insightful for the stated reason**. While other experiments took priority over the last week, we plan to study another way to retrieve hard negatives via BERT-embedding NN search for camera ready.
Thank you for the smaller fixes and comments!
You are right that $\lambda$ should not be $-1$. We will fix this!
> "Where does the 37.5 score come from with Diffusion Classifier/DiffusionITM on Winoground text?"
We evaluated with 250 samples of uniform timesteps and beyond that we are not sure how to explain the difference. There is a small standard deviation and after a quick search in the DiffusionClassifier paper we did not find how many samples DiffusionClassifier used for Winoground.
Regarding your suggestions about speed, we already used fp16 and have now also tried FlashAttention with small speed gains. Thank you! For our final repository or follow-up work we will optimize speed further. We do not have an A6000 unfortunately. We ran the DiffusionClassifier repo on our side over the last few days (to check something with Stable Diffusion XL) and it seemed comparably slow to ours, but this is hard to tell due to the adaptive strategy!
We hope we have sufficiently answered all of your comments point by point, and are happy to engage further on more questions! Considering this, we hope you increase your rating of our paper, and champion it for publication at this conference.
---
Rebuttal Comment 1.1:
Comment: - OpenCLIP ViT-H/14 -- I just tried downloading it via the OpenCLIP repo and it seems to be available now. Could you run this baseline?
- Glad my smaller comments (like $\lambda$ for the hard negative finetuning) were helpful!
- Could you report the mean and variance of the DiffusionITM/Diffusion Classifier Winoground text score evaluation with 5 random seeds? I'm curious how much the randomness in evaluation affects the score, especially since there are so few examples in Winoground. This might also be a good idea for the other benchmarks where there are fewer test examples.
- I brought up Diffusion Classifier's A6000 inference speed since your Appendix G mentioned using an A6000 for inference. I'm still curious about the runtime of your implementation. How long does it take to evaluate a single Winoground text score (4 image-caption pairs x 250 timesteps = 1000 total evaluations) with your implementation?
Hopefully the authors can add these results if they have time. If not, I still really like this paper and have increased my score.
---
Reply to Comment 1.1.1:
Comment: - ViT-H/14 works now! Loading it a week ago threw an error. We will run it soon for all datasets.
- In response to reviewer jr2c, we computed std for Winoground but with 10 timesteps (which means higher variance) for 5 seeds. This was our response: "We did not have time during rebuttal to compute std on all data but studied Winoground (as the smallest dataset) with a small sample size of 10 noise-timestep samples per datapoint, i.e. where we expect high std. Result: $31.45$ % $\pm 0.43$"
- We just ran our code with batchsize=1, 250 timesteps and a single GPU for 4 pairs, as you suggested. It took around 71 seconds, so 18 seconds per pair. This would be faster with multi-GPU and slightly faster with larger batchsize.
Thank you again for your time and trust in the paper! | null | null | null | null | null | null |
Prototypical Variational Autoencoder for 3D Few-shot Object Detection | Accept (poster) | Summary: This paper proposes a novel approach for few-shot 3D object detection by combining prototype learning and variational autoencoders. To address the weak geometry regularization and data imbalance issues of the existing methods, it proposes a novel VAE specifically designed for prototype learning named Prototypical VAE (P-VAE). Moreover, it extends P-VAE with geometric-informative prototypes and class-specific prototypes to enhance the object detection performance in the few-shot regime. Experimental results on several benchmarks validate the effectiveness of the proposed method.
Strengths: 1. This paper proposes two prototypical VAEs based on geometric-informative and class-specific prototypes, which help solve the weak geometry regularization and data imbalance issues encountered by existing few-shot 3D object detection methods based on prototype learning. This combination of prototype learning and VAE is novel and effective.
2. Experimental results showed strong support for the effectiveness of the proposed method in the few-shot regime. In particular, it shows consistent improvement over baseline methods on various settings of different benchmarks.
3. The paper is well written, with easy-to-follow equations and proper notations.
Weaknesses: 1. In order to detect objects from the scene point clouds, a number of instances should be predicted. However, it is not clear how to obtain the number of the predicted instances $N_{ins}$ from $N_z$ features $z'_i$.
2. There is no ablation study of the two feature calibration described in Eq-6 and Eq-9, how much do they contribute to the final performance improvement?
3. What are the object detection heads in Figure-2? How are they trained exactly?
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Please refer to the weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: The limitations of the proposed method and potential solutions are discussed in the final section of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for reading our submission carefully and for the insightful suggestions. We address the reviewer’s comments below. Also, we will release code upon the publication of this work.
**Q1: How to obtain the number of instances $N_{ins}$?**
A1: Given the per-point features $\\{z'_i\\} _{i=1}^{N_z}$, we use MLP to predict both feature offsets ($D$-dimension) and point offsets (3-dimension). We add the point offsets to the original point to get the shifted points, then adopt Furthest Point Sampling (FPS) to obtain $N_{ins}$ votes. $N_{ins}$ is produced by the network on-the-fly, depending on the input scenario. The above procedure is a clustering module following VoteNet[1], here we link pseudocode https://drive.google.com/file/d/1f_nBjNV_eWK90lP1Xya6ByVUszC97yNw/view?usp=sharing for your reference.
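To make the clustering step above concrete, here is a minimal numpy sketch of the VoteNet-style procedure the authors describe (predict point offsets, shift the points, then run Furthest Point Sampling). The offset values are random stand-ins for the MLP's learned outputs, and the function names are ours, not the authors':

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Greedy FPS: iteratively pick the point farthest from the chosen set."""
    chosen = [0]
    dist = np.linalg.norm(points - points[0], axis=1)
    for _ in range(k - 1):
        idx = int(np.argmax(dist))
        chosen.append(idx)
        dist = np.minimum(dist, np.linalg.norm(points - points[idx], axis=1))
    return np.array(chosen)

def cluster_votes(points, offsets, n_ins):
    """Shift each point by its predicted offset, then FPS the shifted
    points to obtain n_ins vote (cluster) centers."""
    shifted = points + offsets
    centers_idx = farthest_point_sampling(shifted, n_ins)
    return shifted, shifted[centers_idx]

rng = np.random.default_rng(0)
points = rng.normal(size=(200, 3))
offsets = 0.1 * rng.normal(size=(200, 3))  # stand-in for the MLP's point offsets
shifted, centers = cluster_votes(points, offsets, n_ins=8)
print(centers.shape)  # (8, 3)
```

Note how $N_{ins}$ here is simply the number of FPS samples requested, which matches the rebuttal's point that the instance count is produced on-the-fly from the input scenario.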
**Q2: Ablation studies for feature calibration.**
A2: Thank you for pointing this out. In GP- and CP-VAE, we use a cross-attention module (Eq. 6 and Eq. 9) to enrich the features with the prototype information. Our studies of feature calibration were inadequate in the submitted version. Here we add comparisons with three calibration approaches: concatenation[2], perturbed attention[3] and adaptive attention[4]. See Table A: we find that adaptive attention achieves better performance than our base cross-attention. The reason may be that soft attention with a meta-reweighting strategy can localize and highlight the regions of interest in query samples. We will revise this part of the method carefully; thanks again for your suggestion.
| | | | | |
|:------------------------:|:---------:|:---------:|:---------:|:---------:|
| | 3-shot | 3-shot | 5-shot | 5-shot |
| | AP25 | AP50 | AP25 | AP50 |
| P-VAE | 31.60 | 19.37 | 32.84 | 22.39 |
| Concatenation\[2\] | 23.44 | 11.07 | 29.18 | 17.92 |
| Perturbed Attention\[3\] | 29.31 | 10.09 | 31.86 | 19.23 |
| Adaptive Attention\[4\] | **31.89** | **20.04** | **33.01** | **22.78** |
Table A: Ablation studies of feature calibration strategies on FS-ScanNet Split-1.
**Q3: Detail the detection heads. Besides, how to train the heads?**
A3: Thank you for pointing this out. Referring to the 'Object Detection Heads' in Figure 2 of the paper, the detection includes three steps:
(i) Normalization. Given $N_{ins}$ clusters, each cluster is denoted as $C = \\{c_i\\}_{i=1}^n$, where $n$ is the number of features assigned to $C$ through FPS (discussed in A1). $c_i$ is composed of two attributes: the $D$-dimension features $w'_i$ (see Figure 2 in the paper) and the 3-dimension shifted point coordinates $p_i$. We calculate the clustering center $\\bar p=\\frac{1}{n} \\sum_{i=1}^n {p_i}$, then locally normalize $p_i$ to $p'_i=(p_i - \\bar p)/r$, where $r$ is the maximum distance to the center.
(ii) We concatenate $w'_i$ with $p'_i$ for the ($D$+3)-dimension $c'_i$, then use a 1-layer MLP on each $c'_i$, resulting in $n$ features. We collect them and use MaxPooling to get a single ($D$+3)-dimension feature $c$.
(iii) We use a 1-layer MLP on $c$ to obtain the final vector, which contains an objectness score, bounding box information (i.e., xyzlwh) and classification scores.
Similar to VoteNet, we use the standard 3D detection loss to supervise the head. We assign predictions within 0.3m of a GT as 'positive', and predictions farther than 0.6m from any GT as 'negative'. We use Cross-Entropy loss to supervise the pos/neg objectness. For each positive candidate, we take the nearest GT as its paired GT, which is used to supervise the box location and dimension with Smooth-L1 loss. We also use Cross-Entropy loss to supervise the classification score of each positive candidate.
We will add this part in the Implementation Details section of the revised version for clarification.
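The three head steps (local normalization, per-point MLP + max-pooling, final prediction) can be sketched in a few lines of numpy. This is only an illustration of the data flow: the weight matrices are random stand-ins for the learned MLPs, and the output dimension (objectness + xyzlwh box + class scores) is our assumption about the layout:

```python
import numpy as np

def detect_from_cluster(feats, pts, w1, w2):
    """Sketch of the detection head: normalize cluster points, concatenate
    with features, apply a per-point 1-layer MLP, max-pool over the cluster,
    then predict the final vector (objectness + box + class scores)."""
    center = pts.mean(axis=0)
    r = np.linalg.norm(pts - center, axis=1).max()
    pts_n = (pts - center) / max(r, 1e-8)        # step (i): local normalization
    c = np.concatenate([feats, pts_n], axis=1)   # step (ii): (D+3)-dim per point
    h = np.maximum(c @ w1, 0)                    # 1-layer MLP with ReLU
    pooled = h.max(axis=0)                       # MaxPooling over the cluster
    return pooled @ w2                           # step (iii): final prediction

rng = np.random.default_rng(1)
D, n = 16, 12
out_dim = 1 + 6 + 5  # objectness + xyzlwh + 5 class scores (assumed layout)
feats = rng.normal(size=(n, D))
pts = rng.normal(size=(n, 3))
w1 = rng.normal(size=(D + 3, D + 3))  # stand-in for the learned per-point MLP
w2 = rng.normal(size=(D + 3, out_dim))  # stand-in for the learned output MLP
pred = detect_from_cluster(feats, pts, w1, w2)
print(pred.shape)  # (12,)
```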
[1] Qi, Charles R., et al. "Deep Hough Voting for 3D Object Detection in Point Clouds." ICCV 2019.
[2] He, Shuting, et al. "Prototype Adaption and Projection for Few-and Zero-Shot 3D Point Cloud Semantic Segmentation." TIP 2023.
[3] Lu, Yu, et al. "Attention Calibration for Transformer in Neural Machine Translation." ACL-IJCNLP 2021.
[4] Jiang, Zihang, et al. "Few-Shot Classification via Adaptive Attention." arXiv preprint arXiv:2008.02465 (2020).
---
Rebuttal Comment 1.1:
Title: Post-rebuttal comment
Comment: I appreciate the author's feedback, which addressed my issues. Considering the general positive comments of other reviewers, I thus keep my original rating. | Summary: This paper studies a challenge task called FS3D. They first presents that the previous work on FS3D lacks fine-level supervision, as the intermediate features are simply averaged to update the prototypes, which are then used to augment features for sequential detection. In order to solve this problem, they leverage Prototypical Variational Autoencoder to conduct regularization on both local and global levels.
Strengths: 1. The paper is well written and easy to follow since the motivation is clear.
2. The experimental improvement is significant.
Weaknesses: 1. The discussion of related work on FSL for 3D point clouds is insufficient.
2. Why does the Prototypical Variational Autoencoder work for this task? Have you tried other models for reconstruction?
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. Please see the weakness
2. Can the proposed method be applied to other few-shot learning tasks in point clouds, such as segmentation?
My main concern is why the authors choose the Prototypical Variational Autoencoder as the reconstruction model. Does it have a property that makes it especially suited to FS3D? This is the key question for the acceptance of this paper.
If the authors could address my concerns, I am willing to increase the score.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for reading our submission and for the insightful comments. We address the reviewer’s comments below.
**Q1: Survey on 3D FSL.**
A1: Thank you for pointing this out. Our work focuses on FS3D, we believe a more careful review on FSL for point cloud can lead to high-level insights. We categorize recent works into four families according to the FSL strategies. [10,14,16] incorporate self-supervision or semi-supervision. [6,7,8,9,13] regularize and align the latent space with the concept ‘prototype’. High-dimensional prototypes are used to cluster the embeddings while low-level prototypes represent basic geometric structures. [1,4,7,8,9,12,17] aim to evaluate the difference/similarity between base and novel pairs, such that the features can be separated more discriminatively. [2,3,5,11,15] enrich the limited 3D samples with data augmentation and multi-modality fusion. We will give a more detailed and comprehensive overview in the revised version.
Here we link the reference [1-17] https://drive.google.com/file/d/1WyLnRVP53627-qZ861wJpZgq241Slefi/view?usp=sharing.
**Q2: Why does P-VAE work? Why is it a better reconstruction model in comparison with other reconstruction approaches?**
A2: Thank you for your careful review. Parallel to the detection task, we add a reconstruction task for preserving 3D information in the latent space; this is a widely-used self-supervised learning scheme for 3D FSL. Compared with the original reconstruction model AE, P-VAE learns a multi-center distribution in which each component is centered at a prototype, enabling us to sample from the probabilistic space. This design imitates real 3D environments (a prototype can have many variants) and also tackles the data imbalance problem (although novel samples are limited, we can augment features by sampling). See Table A for quantitative results showing that P-VAE is superior to AE. In fact, the key is to obtain a parameterized latent space in the reconstruction model. Inspired by your suggestion, we also study other reconstruction models[18,19]. In Table B, ‘w/’ and ‘w/o’ indicate whether we use variational probabilistic learning or not. We modify a CP-VoteNet as our baseline. We find that the extra reconstruction task alone gains only marginal improvement (regardless of the approach), whereas the incorporation of a parameterized feature space does matter.
For more discussion on how P-VAE helps FS3D, due to space limit please refer to this link https://drive.google.com/file/d/1EBdjNVKS194HKfJtCOCje4UPQ-fKj1ry/view?usp=sharing. We appreciate your patient reading.
| | | |
|:-------------------------------:|:---------:|:---------:|
| | AP25 | AP50 |
| Prototypical VoteNet + AE | 14.97 | 9.32 |
| P-VAE | **16.00** | **10.22** |
Table A: Performance of using P-VAE over [13] + AE on FS-ScanNet Split-1 (1-shot).
| | | |
|:----------------------:|:---------:|:---------:|
| | AP25 | AP50 |
| Baseline | 13.47 | 8.33 |
| \+ GAN\[18\] w/o | 14.13 | 8.92 |
| \+ GAN\[18\] w/ | 14.96 | 9.51 |
| \+ Diffusion\[19\] w/o | 14.20 | 9.05 |
| \+ Diffusion\[19\] w/ | 15.43 | 9.68 |
| P-VAE | **16.00** | **10.22** |
Table B: Reconstruction models on FS-ScanNet Split-1 (1-shot).
For GAN, we pass the voted instance-level features through an MLP to predict distribution parameters, then feed the sampled features into a generator[20] to reconstruct the object, while the discriminator produces per-point scores to distinguish our prediction and GT. For diffusion model, we follow the similar scheme to incorporate probabilistic modeling, and the shape latent is replaced by each voted feature.
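The prototype-centered probabilistic sampling that distinguishes P-VAE from a plain AE can be illustrated with the reparameterization trick. This is a minimal sketch under our own assumptions: the prototype assignment and the encoder-predicted mean offsets and log-variances are random stand-ins, not the authors' trained network outputs:

```python
import numpy as np

def sample_around_prototypes(mu_offset, log_var, prototypes, assign, rng):
    """Reparameterized sampling from a Gaussian centered at each feature's
    assigned prototype: z = prototype + mu_offset + sigma * eps."""
    eps = rng.normal(size=mu_offset.shape)
    sigma = np.exp(0.5 * log_var)
    return prototypes[assign] + mu_offset + sigma * eps

rng = np.random.default_rng(2)
K, D, N = 4, 8, 10                         # prototypes, feature dim, features
prototypes = rng.normal(size=(K, D))
assign = rng.integers(0, K, size=N)        # prototype assignment (assumed given)
mu_offset = 0.1 * rng.normal(size=(N, D))  # encoder-predicted mean offsets (stand-in)
log_var = -2.0 * np.ones((N, D))           # encoder-predicted log-variances (stand-in)
z = sample_around_prototypes(mu_offset, log_var, prototypes, assign, rng)
print(z.shape)  # (10, 8)
```

Sampling repeatedly from such prototype-centered Gaussians is what allows feature augmentation for data-scarce novel classes, which is the data-imbalance argument made above.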
[18] Goodfellow, Ian, et al. "Generative Adversarial Nets." NIPS 2014.
[19] Luo, Shitong, and Wei Hu. "Diffusion Probabilistic Models for 3D Point Cloud Generation." CVPR 2021.
[20] Li, Chun-Liang, et al. "Point Cloud GAN." arXiv preprint arXiv:1810.05795 (2018).
**Q3: Does P-VAE work for other FSL 3D tasks? e.g., Segmentation.**
A3: Yes. P-VAE is a general prototype learning scheme that can be plugged into many point cloud networks. Please refer to Supplementary C.5, where we can easily adapt P-VAE for various architectures.
[21] and [22] are grouping-based instance segmentation methods. We can deploy GP-VAE at the pre-grouping stage and CP-VAE at the post-grouping stage. [23] is a transformer-based network, we then deploy GP-VAE prior to the transformer decoder, and CP-VAE between the decoder and the mask module. Considering the computational cost, P-VAE is not preferred in the shallow layers.
We provide some preliminary results of [21] + plug-in P-VAE, comparing with a recent segmenter[24] on ScanNetv2. Please see Table C and Figure A https://drive.google.com/file/d/1fKREOe6VjEZmwRqWmLVxxRzFLpU8Rugs/view?usp=sharing. We also observe that our recall, especially for objects with larger sizes, is lower than [24]. The coarse implementation has not been thoroughly checked so we will finalize the results in the revised version.
| | | |
|:---------------------:|:--------:|:--------:|
| | mAP | AP50 |
| Geodesic-Former\[24\] | **10.6** | **19.8** |
| \[21\] + P-VAE | 8.04 | 15.35 |
Table C: Both methods are trained on Fold0 and tested on Fold1(1-shot).
[21] Jiang, Li, et al. "Pointgroup: Dual-set point grouping for 3d instance segmentation." CVPR 2020.
[22] Vu, Thang, et al. "Softgroup for 3d instance segmentation on point clouds." CVPR 2022.
[23] Schult, Jonas, et al. "Mask3D: Mask Transformer for 3D Semantic Instance Segmentation." ICRA 2023.
[24] Ngo, Tuan, and Khoi Nguyen. "Geodesic-Former: A Geodesic-Guided Few-Shot 3D Point Cloud Instance Segmenter." ECCV 2022. | Summary: The paper proposes an approach to enhance Few-Shot 3D Point Cloud Detection (FS3D) through prototype learning with VAEs. The authors leverage VAEs to learn prototypes represented by GMM-like distributions. Two VAEs are specifically designed to preserve geometric information and refine instance features. The effectiveness of the approach is validated through various experiments.
Strengths: 1.The paper introduces an approach utilizing VAEs to learn geometric and class-specific prototypes with improved performance on FS3D tasks.
2.The effectiveness of the modules and prototypes is demonstrated through various experiments.
3.The writing is easy to follow.
Weaknesses: 1.The authors do not provide a comprehensive and detailed discussion of their motivations for the proposed approach. This should include a clear identification of the core issues, a thorough evaluation of the limitations of previous works in addressing this problem, and a convincing explanation of how incorporating VAEs can effectively tackle these limitations. It would greatly enhance the paper's credibility if qualitative and quantitative results were presented to support their motivations and demonstrate the superiority of the proposed method.
2.The sensitivity of the number of geometric-informative prototypes as observed in Table 6 raises concerns about the practical applicability of the proposed approach. It would be beneficial if the authors discussed the implications of this sensitivity and provided insights on how to determine an optimal or adaptive number of Geo-proto for real-world applications.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1.The paper does not explain the reason why prototypes learned by previous methods lose substantial 3D information and become less geometric-informative.
2.The motivation for using VAEs is not fully stated.
3.Are there any empirical results to demonstrate the underrepresentation / overfitting problem of prototypes?
4.Why can GMM-like distribution handle the overfitting problem of the latent space?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: 1.The authors have addressed the limitations that data-rooted problem will lead the representative prototypes to distort and perturb the latent space negatively.
2.Additionally, the proposed approach depends on the number of prototypes. The sensitivity raises concerns about the practical applicability of the proposed approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to read our submission and for the valuable comments. We address the reviewer’s comments below.
**Q1: Why are the prototypes learnt by previous methods less geometric-informative? More detailed discussions about the limitations of previous works.**
A1: Thanks for your in-depth comments. The typical FS3D methods related to geometric prototypes are [1][2]. Since [2] does not explicitly learn the prototypes, we will mainly discuss [1] and briefly mention [2].
[1] learns geometric prototypes in the encoder embedding space. This high-dimensional, disordered space (where features are grouped and the average of each group is used as one prototype) is directly connected to the downstream task. The framework is supervised by only a task loss, which is specified to fit the target GT (i.e., 3D boxes). Without extra regularization, the deep network loses control of the latent space. The linked Figure A {https://drive.google.com/file/d/1SVDLQj6F0wXsO-kO0-HuYg-0ERVaCkNW/view?usp=sharing} illustrates how geometric prototypes can construct a whole scenario (please see Supplementary B.2 for implementation details). Each prototype is painted with a unique color. Figure A demonstrates that the latent space of [1] fails to preserve meaningful 3D information: the geometric prototypes are not well-organized and show high randomness. In contrast, our prototypes retain reasonable physical structure and semantic consistency.
The prototypes in [2] are not learnable features but 3D point clouds. These low-level prototypes are generated from random points through affine transformations, then clustered into categories by pairwise similarity. As shown in Figure B https://drive.google.com/file/d/1xTdfyuL8mtXrlGED2-lsaQWBUqFDFB-x/view?usp=sharing , such prototypes can be very unrealistic compared with our common sense of real-world objects.
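For concreteness, the kind of low-level prototype generation described for [2] can be sketched as follows. This is a hypothetical illustration only (function names, point counts, and the use of fully random affine transforms are our assumptions, not [2]'s actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

def affine_point_prototypes(n_protos, n_points=64):
    """Illustrative only: generate low-level 3D point-cloud prototypes by
    pushing one set of random points through random affine transforms,
    mimicking the non-learnable prototypes attributed to [2]."""
    base = rng.standard_normal((n_points, 3))  # random source points
    protos = []
    for _ in range(n_protos):
        A = rng.standard_normal((3, 3))        # random linear part
        t = rng.standard_normal(3)             # random translation
        protos.append(base @ A.T + t)
    return np.stack(protos)                    # (n_protos, n_points, 3)

protos = affine_point_prototypes(n_protos=8)
```

Because nothing constrains the transforms toward real object shapes, such prototypes can look arbitrary, which is the point being made above.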
**Q2: The motivation of using VAE.**
A2: The core of FS3D is understanding 3D scenes with the least data requirement. We detail the motivation from four aspects.
(i) For Geometric-informative Prototypes (GP), we use a set of a fixed number of GPs to describe the entire 3D world; thus, we expect each GP to have different shape variants given different contexts (e.g., the 'stick' GP can be short in a chair but long in a desk). The VAE allows us to sample diverse features based on the prototypes (Gaussian centers) and can therefore provide a wide range of variants.
(ii) For Class-specific Prototypes (CP), the base-class prototypes can be superior to the novel-class ones because of data imbalance, as shown in Figure C of A3. We therefore use the VAE for alignment. The parameterized latent space can help augment data by covering various features around a prototype center.
(iii) Particularly for the novel-class CPs, since we have access to very limited samples, we cannot fully understand all the shape variants of a category, as shown in Figure D https://drive.google.com/file/d/1k0bP29YyCBvvErqhcCCSZQ5S3q-bz-NB/view?usp=sharing. The distribution space of the VAE retains higher feature transferability, so we can learn class-specific prototypes instead of instance-specific ones.
(iv) Since the VAE is a generative model, its reconstruction task can serve as a regularization scheme that preserves the 3D information of the latent space, as we discussed in A1.
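The sampling idea in (i) can be sketched minimally: treat a prototype as a Gaussian center and draw shape variants via the reparameterization trick. This is a hypothetical illustration (the names, dimensionality, and fixed log-variance are assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_prototype_variants(mu, log_sigma, n_samples):
    """Draw feature variants around a prototype center.

    mu        : (d,) prototype treated as a Gaussian mean in latent space
    log_sigma : (d,) log standard deviation, one scale per latent dim
    Returns an (n_samples, d) array via z = mu + sigma * eps."""
    sigma = np.exp(log_sigma)
    eps = rng.standard_normal((n_samples, mu.shape[0]))
    return mu + sigma * eps

# A toy 8-dim prototype: all variants share the center but differ in shape.
mu = np.zeros(8)
variants = sample_prototype_variants(mu, log_sigma=np.full(8, -1.0), n_samples=1000)
```

Sampling many such variants per prototype is what gives one prototype (e.g., the 'stick' GP) different realizations in different contexts.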
**Q3: Results that can explain the underrepresentation/overfitting of prototypes.**
A3: Please see Figure C https://drive.google.com/file/d/13xzTPOpgCTA_FqKbGgA6C0AWD6YuhpEJ/view?usp=sharing for comparison results. Due to the data imbalance problem, for the baseline [1] we can observe that the quality of novel-class prototypes is significantly worse than that of base classes, whereas P-VAE can generate equally convincing prototypes without much bias.
As an extension of A2 (iii), we conduct the following experiment. Given the novel class 'Table', we split the training samples into two sets w.r.t. their shapes: Standard Tables (i.e., four-leg tables with a rectangular tabletop, as commonly found in the real world) and Strange Tables (i.e., those with a round or irregularly shaped top, supported by a solid base instead of four legs). The networks are trained on one type and tested on the other. The quantitative results are in Table A. [1] fits the given instances and cannot generalize to all tables, whereas P-VAE learns generalized category information and thus transfers better to other table-like shapes.
| Method | Train on Table1, Test on Table2 (AP25) | Train on Table1, Test on Table2 (AP50) | Train on Table2, Test on Table1 (AP25) | Train on Table2, Test on Table1 (AP50) |
|:-------------------------:|:---:|:---:|:---:|:---:|
| Prototypical VoteNet\[1\] | 18.68 | 7.01 | 16.92 | 6.72 |
| P-VAE | **23.05** | **10.44** | **21.97** | **9.41** |
Table A: Using Standard-shaped table (Table1) and Strange-shaped table (Table2) for training/testing on FS-ScanNet Split-2 (5-shot).
**Q4: Why can the GMM-like distribution handle the overfitting problem?** & **Q5: More discussion on the number of prototypes (related to Table 6 in the paper). How sensitive is it? How to determine an optimal number?**
A4 & A5 & Reference: Due to space limitation, we link the response here https://drive.google.com/file/d/1FVx8bJ6zxUTpsBfvExtRuwnaF4MvbOhH/view?usp=sharing. We appreciate your patient reading.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' efforts and responses. However, I remain unconvinced by the explanations provided for Q2 and Q4, as they do not offer additional insights. Therefore, I will maintain my rating as borderline accept. | Summary: The paper introduces Prototypical Variational Autoencoder (P-VAE) for Few-Shot 3D Point Cloud Object Detection. It tackles the preservation of geometric information and data imbalance through learning distribution parameters. The authors propose two extensions, GP-VAE and CP-VAE, focusing on geometry and class specificity. Experiments demonstrate improved performance over state-of-the-art methods.
Strengths: 1. The P-VAE is innovative, focusing on learning distribution parameters instead of features, which is useful for few-shot learning.
2. The encoder-decoder architecture effectively preserves geometric information, critical for 3D detection.
3. The paper includes a robust experimental evaluation, showing performance gains and insights into prototype contributions.
Weaknesses: 1. The paper does not include experiments assessing the effectiveness and generalization of P-VAE across different domains in few-shot learning, despite P-VAE being one of the main contributions.
2. The observed improvements in the main experiments are relatively marginal, particularly with regard to the evaluation metric AP25.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: None
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to review our paper. Overall, most of the review comments request clarifications and minor revisions to the paper, which we will carefully make. We will also release the code upon publication of this work.
**Q1: Effectiveness and generalization of P-VAE across different datasets.**
A1: Thanks for your insightful suggestion. Cross-dataset validation is indeed critical for addressing the distribution gap between point cloud datasets, which is commonly observed in the real world. Inspired by your suggestion, we conduct a series of experiments across the FS-ScanNet and FS-SUNRGBD datasets. We compare AP results not only for VoteNet [1] (the vanilla detector) and Prototypical VoteNet [2] (the few-shot learning detector) but also for an implemented transferable P-VAE+. Since cross-domain few-shot learning (CDFS) for point cloud object detection is under-explored, we adapt a 2D CDFS method [3], which utilizes high-order self-supervision to regularize the latent space; the training scheme follows [4]. Please see Tables A-C below for details.
- Part 1 Train on FS-ScanNet Split-1 then test on FS-SUNRGBD (5-shot).
| |AP25|AP50|
|:---:|:---:|:---:|
|VoteNet[1] + FineTune|20.66|1.64|
|Prototypical VoteNet[2]|21.95|1.77|
|P-VAE|23.49|**1.91**|
|P-VAE+|**23.52**|1.87|
Table A: Train on base classes in FS-ScanNet split-1 (Cabinet, Bed, Chair, Sofa, Table, Door, Picture, Counter, Desk, Curtain, Refrigerator, ShowerCurtain, and Sink), then test on novel classes in FS-SUNRGBD (Toilet and Night Stand).
- Part 2 Train on FS-ScanNet Split-2 then test on FS-SUNRGBD (5-shot).
| |AP25|AP50|
|:---:|:---:|:---:|
|VoteNet[1] + FineTune|24.92|13.08|
|Prototypical VoteNet[2]|26.25|18.90|
|P-VAE|28.67|19.30|
|P-VAE+|**29.02**|**19.76**|
Table B: Train on split-2 base classes in FS-ScanNet (Cabinet, Chair, Sofa, Toilet, Door, Picture, Counter, Desk, Curtain, Refrigerator, ShowerCurtain, and Sink), then test on novel classes in FS-SUNRGBD (Bed, Table, and Night Stand).
- Part 3 Train on FS-SUNRGBD then test on FS-ScanNet (5-shot).
| |AP25|AP50|
|:---:|:---:|:---:|
|VoteNet[1] + FineTune|23.18|2.98|
|Prototypical VoteNet[2]|25.93|3.54|
|P-VAE|26.17|3.73|
|P-VAE+|**26.73**|**3.76**|
Table C: Train on base classes in FS-SUNRGBD (Sofa, Chair, Desk, Dresser, Bookshelf, and Bathtub), then test on novel classes in FS-ScanNet (Bed, Table, Toilet, and Garbagebin).
We will survey other CDFS approaches and exploit them on point cloud object detection. In the revised version, we will provide additional cross-dataset results to support further studies on 3D CDFS.
**Q2: Explanation for the marginal improvements, especially on AP25.**
A2: Thank you for your comments on the experiment section. On both benchmarks, our method consistently achieves the top quantitative results, as shown in Tables 1 and 2 in the paper.
(1) As for the comparable entries, P-VAE in fact gains significant improvements on the common-in-real-world categories; please see Table D. These objects frequently appear in daily life, so their precision is critical for practical usage. Please also see the qualitative results in the linked Figure A https://drive.google.com/file/d/1_yfH-zu4PIa-vFGEr9AZ-0FF4bcGggRO/view?usp=sharing (top: FS-ScanNet; bottom: FS-SUNRGBD), where P-VAE predicts comparable or fewer boxes, but with superior accuracy (similar xyz locations, but better lwh dimensions).
(2) The result in Figure A accords with your observation that our improvements on AP50 are stronger than on AP25. AP50 is a stricter metric: in some cases, [2] predicts a coarse box with obvious deviation; although it can be counted for AP25, it is actually a false positive under AP50.
(3) As for your suggestion on AP25, a possible solution for producing more predicted candidates is to augment novel samples with PointMixup [5] and then use the post-augmentation data for self-ensembling training [6]. The implementation is still ongoing, so we will provide the results in the revised version.
| Category | P-VAE (AP25) | P-VAE (AP50) | Prototypical VoteNet [2] (AP25) | Prototypical VoteNet [2] (AP50) |
|:-------------:|:---:|:---:|:---:|:---:|
| Bathtub | **42.99 (+9.97)** | **35.75 (+13.71)** | 33.02 | 22.04 |
| Toilet | **45.51 (+6.61)** | **19.31 (+3.80)** | 38.80 | 16.23 |
| ShowerCurtain | **29.14 (+13.80)** | **4.98 (+1.26)** | 15.34 | 3.72 |
| Sofa | **14.27 (+2.31)** | **9.81 (+3.35)** | 11.96 | 6.46 |
Table D: Results on common-in-real-world classes on FS-ScanNet Split-1 (1-shot).
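To make the AP25/AP50 distinction in point (2) concrete, here is a toy axis-aligned 3D IoU check (purely illustrative; real detection benchmarks use oriented boxes and per-class average precision):

```python
def iou_3d(box_a, box_b):
    """IoU of two axis-aligned 3D boxes given as
    (xmin, ymin, zmin, xmax, ymax, zmax)."""
    lo = [max(box_a[i], box_b[i]) for i in range(3)]
    hi = [min(box_a[i + 3], box_b[i + 3]) for i in range(3)]
    inter = 1.0
    for l, h in zip(lo, hi):
        if h <= l:
            return 0.0       # no overlap along this axis
        inter *= h - l
    def vol(b):
        return (b[3] - b[0]) * (b[4] - b[1]) * (b[5] - b[2])
    return inter / (vol(box_a) + vol(box_b) - inter)

gt = (0, 0, 0, 2, 2, 1)            # ground-truth box
pred = (0.5, 0.5, 0, 2.5, 2.5, 1)  # roughly right location, deviated extent
iou = iou_3d(gt, pred)             # ~0.39
```

A coarse box like `pred` overlaps the ground truth enough to count as a true positive at IoU 0.25 but falls short of the 0.5 threshold, which is why a method can look comparable on AP25 yet clearly worse on AP50.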
[1] Qi, Charles R., et al. "Deep Hough voting for 3D Object Detection in Point Clouds." CVPR 2019.
[2] Zhao, Shizhen, and Xiaojuan Qi. "Prototypical VoteNet for Few-Shot 3D Point Cloud Object Detection." NeurIPS 2022.
[3] Yuan, Wang, et al. "Task-level Self-Supervision for Cross-Domain Few-Shot Learning." AAAI 2022.
[4] Oh, Jaehoon, et al. "Understanding Cross-Domain Few-Shot Learning Based on Domain Similarity and Few-Shot Difficulty." NeurIPS 2022.
[5] Chen, Yunlu, et al. "PointMixup: Augmentation for Point Clouds." ECCV 2020.
[6] Zhao, Na, Tat-Seng Chua, and Gim Hee Lee. "SESS: Self-Ensembling Semi-Supervised 3D Object Detection." CVPR 2020.
---
Rebuttal Comment 1.1:
Comment: I read the author's rebuttal carefully and it solves my concerns well. For question 1, although the improvement is relatively marginal, the author did conduct more experiments on different datasets to showcase the effectiveness and generalization of the proposed method. For question 2, the author clarifies that the performance improvement of AP50 is good and AP50 is a stricter metric. Therefore, I will raise my rating to weak accept. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Policy Finetuning in Reinforcement Learning via Design of Experiments using Offline Data | Accept (poster) | Summary: The authors introduced the concept of a sparsified MDP. Based on this concept, they proposed a new algorithm that takes a dataset as input, uses it to design and deploy a non-reactive exploratory policy, and then outputs a locally near-optimal policy. A nearly minimax-optimal upper bound on the sample complexity of learning a local ε-optimal policy with this algorithm is also established.
Strengths: - This paper proposes a new setting where the agent has access to an offline dataset and can further explore the environment online with a non-reactive policy. In this way, the agent can utilize the offline dataset while avoiding the engineering costs associated with switching policies.
- The sparsified MDP provides a perspective on combining both the optimism and pessimism principles, which is interesting.
- The authors obtain a sample complexity bound for the new approach, and the bound is tighter than that of previous methods.
- The paper is well-written and easy to follow.
Weaknesses: - The algorithm is based on tabular setting, which limits its application.
- The exploratory policy only explores within the sparsified MDP. In this way, the final policy is strictly limited by the offline dataset even if the budget for online interaction is large.
- Transitions that do not meet the threshold are dropped directly, even though they contain some information about the environment.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Is it possible for the exploratory policy to explore outside the coverage of the offline dataset? I know it is impossible if there is no data at all, but it seems that using the information in the dropped transitions may help.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations are not explicitly discussed by the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and suggestions on our paper! We will try to answer your questions below.
1. Q: The algorithm is based on tabular setting, which limits its application.
A: Indeed, our paper is a first step toward exploration with a non-reactive policy that is computed with the help of offline data.
Although we could have tackled the setting of function approximation, doing so would have involved a number of technical considerations that might have made the result more cluttered and less clear for a first paper on such topics. We do agree with the reviewer that the extensions to the function approximation setting are important future directions, and we will mention this in the conclusion.
2. Q: The exploratory policy only explores in the sparsified MDP. In this way, the final policy is strictly limited by the offline dataset even if the budget for online interaction is large.
A: The reviewer’s observation is correct, and this is a limitation of exploration with non-reactive policies. In the worst case, one cannot explore the regions not covered by the offline dataset because they may contain hard-to-explore combinatorial structures which command an exponential sample complexity [1]. More practical problems often do not have such worst-case structure; however, what can be achieved in those cases is problem-dependent and is an interesting direction for future research.
3. Q: The transitions that cannot achieve the threshold are dropped directly, even though they contain some information about the environment. Is it possible for the exploratory policy to explore out of the coverage of offline dataset? I know it is impossible if there is no data at all, but it seems that using the information of the dropped transitions may help.
[1] Xiao et al, The curse of passive data collection in batch reinforcement learning.
A: The reviewer is correct that some transitions are dropped even if in certain cases they may contain enough data about the environment. For example, some environments are nearly deterministic, and so even a few samples (below the value of our threshold) would suffice to explore. However, how to explore the regions that do not have enough data, and whether it is even possible to do so, depends on the particular structure of the problem. Including these considerations would have made the algorithm more complex and the analysis more cluttered and so we settled for a ‘hard’ threshold which is simpler but still very effective.
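The 'hard threshold' described above can be illustrated with a minimal tabular sketch. This is a hypothetical simplification, not the paper's exact sparsified-MDP construction (which also has to handle what happens to the dropped mass, e.g., via an absorbing state):

```python
from collections import Counter

def sparsify(dataset, n_threshold):
    """Keep only state-action pairs observed at least n_threshold times.

    dataset: list of (s, a, s_next) transitions from the offline data.
    Returns empirical transition probabilities restricted to kept pairs;
    everything below the threshold is simply dropped."""
    counts = Counter((s, a) for s, a, _ in dataset)
    kept = {sa for sa, n in counts.items() if n >= n_threshold}
    model = {}
    for s, a, s_next in dataset:
        if (s, a) in kept:
            model.setdefault((s, a), Counter())[s_next] += 1
    # normalize counts into empirical transition probabilities
    return {sa: {s2: n / sum(c.values()) for s2, n in c.items()}
            for sa, c in model.items()}

# (0, 'a') is seen 10 times and kept; (1, 'b') is seen once and dropped,
# even though its single sample carries some information.
data = [(0, 'a', 1)] * 5 + [(0, 'a', 2)] * 5 + [(1, 'b', 0)]
model = sparsify(data, n_threshold=3)
```

The dropped pair `(1, 'b')` is exactly the kind of sparsely observed but potentially informative transition the question refers to.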
4. Limitations are not explicitly discussed by the paper.
A: Thanks for your advice on the writing! We will definitely add a conclusion with emphasis on the significance as well as the limitations of our works to the next version of paper.
To summarize, in this paper we leverage offline data to conduct exploration using non-reactive policies. The key contributions lie in the originality of the setup and in the mathematical work describing the conditions that need to be met for sub-optimality guarantees. Algorithmically, this is achieved by a novel blending of the principles of optimism and pessimism to design the exploration policy in a way that is provably efficient. Extending these algorithmic and theoretical insights to derive a practical reinforcement learning algorithm with function approximation is an important next step.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I think this paper is meaningful and am looking forward to your future work to mitigate its shortness. | Summary: The paper explores reinforcement learning applications where a pre-existing dataset of collected experience is available, and suggests the possibility of obtaining additional online data to enhance policy quality. To avoid the costs associated with switching policies, the authors propose utilizing a single non-reactive exploration policy for gathering supplementary data. They present an algorithm with provable guarantees that leverages an offline dataset to design such a policy. The algorithm is analyzed theoretically, and the authors evaluate the final policy's quality based on the local coverage of the original dataset and the amount of additional data collected. Overall, the research contributes to improving reinforcement learning by optimizing data acquisition and policy design.
Strengths: The advantages of this paper are as follows:
1. Although I haven't carefully derived each equation, the proofs in this paper are expected to be accurate, with complete steps and rigorous derivations.
2. This paper represents the pioneering effort in terms of theoretical rigor, addressing the challenge of designing an experiment in reinforcement learning for online passive exploration, leveraging a dataset comprising pre-collected experiences. The setting is novel and has some practical significance.
3. The proof method presented in this paper is innovative and the sample complexity mentioned in the conclusion is also tight. It will introduce new approaches to the theoretical research of RL. Additionally, the conclusions of this paper also reveal some interesting insights that will enrich the existing theoretical achievements in the offline-to-online field.
Weaknesses: 1. The basic assumption of this article is that the offline-trained policy cannot be switched during the online phase, and it allows for the collection of an unlimited number of samples using this policy. I have two concerns. Firstly, from a theoretical research perspective, this assumption narrows down the problem to a very specific setting, so even with rigorous mathematical proofs, the generalizability of the conclusions may be compromised. Secondly, from a practical application standpoint, if the offline-trained policy itself is poor but cannot be switched during the online phase, and a large number of online samples need to be collected using this policy, there will be even greater security issues. As a result, the advocated security considerations in this paper will no longer exist. I appreciate the mathematical methods used in this paper, but the lack of persuasive motivation will impact the significance of this paper.
2. The proof process in this paper is too lengthy and difficult to understand. Although the main text provides some introduction to the overall proof logic, there are too many specific terms involved in the proof without providing intuitive explanations for their generation, which increases the difficulty of understanding. I suggest that the author can provide a more concise version, even if the resulting sample complexity is not optimal, but it can be used to understand the overall proof framework.
3. Another limitation is that it currently only applies to smaller S and A. When S and A are larger, the application will become difficult and the sample complexity will be significant. So I'm curious to know if this approach can be extended to scenarios involving function approximation. If it is possible, what additional considerations or processing steps would be required? If it is not feasible, what challenges exist?
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: See weakness.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors didn't discuss the limitation of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and suggestions on our paper! We will try to answer your questions below.
1. Q: Firstly, from a theoretical research perspective, this assumption narrows down the problem to a very specific setting, so even with rigorous mathematical proofs, the generalizability of the conclusions may be compromised.
A: Thanks for your question! Our study of non-reactive exploration arises from the need for low deployment and switching costs in real-world applications across various domains such as recommendation systems and healthcare. By necessity, we formalized such general research questions in a specific setting where we could obtain concrete results, but we believe that some of the insights and algorithmic design principles extend more broadly.
2. Q: Secondly, from a practical application standpoint, if the offline-trained policy itself is poor but cannot be switched during the online phase, and a large number of online samples need to be collected using this policy, there will be even greater security issues. As a result, the advocated security considerations in this paper will no longer exist. I appreciate the mathematical methods used in this paper, but the lack of persuasive motivation will impact the significance of this paper.
A: Although safety is an important concern in the application of reinforcement learning, it is largely orthogonal to the issues investigated in our paper. As the reviewer noticed, the safety of the online phase can partially depend on the pre-collected dataset, so if we want to take more safety factors into consideration, we can of course apply some additional techniques.
Generally speaking, there are two ways to incorporate safety. The first is to incorporate some techniques from safe RL [1,2,3] into the design of the algorithm, and also to the procedure of collecting the offline dataset. The second is before deployment: non-reactive exploration produces a single policy, and so safety can be checked quite easily before deployment.
3. Q: The proof process in this paper is too lengthy and difficult to understand. Although the main text provides some introduction to the overall proof logic, there are too many specific terms involved in the proof without providing intuitive explanations for their generation, which increases the difficulty of understanding. I suggest that the author can provide a more concise version, even if the resulting sample complexity is not optimal, but it can be used to understand the overall proof framework.
A: We understand that the proof can be long, but this is often a necessary compromise for the sake of rigor, and we will try our best to make it as clear as we can by adding some descriptions about the overall proof strategy.
4. Q: Another limitation is that it currently only applies to smaller S and A. When S and A are larger, the application will become difficult and the sample complexity will be significant. So I'm curious to know if this approach can be extended to scenarios involving function approximation. If it is possible, what additional considerations or processing steps would be required? If it is not feasible, what challenges exist?
A: Thanks for your question! As the first paper working on non-reactive exploration policies in offline-to-online RL, we considered the tabular case.
It is an interesting next step to extend these insights to the function approximation setting. There, one probably needs to overcome additional challenges, starting with defining the sparsified MDP in a way that takes function approximation into account. The method of adding a positive bonus in the offline simulation phase can also be applied, with slight changes to the concrete form of the bonus functions, in the function approximation setting.
5. The authors didn't discuss the limitations of their work.
A: Thanks for your advice on the writing! We will definitely add a conclusion with emphasis on the significance as well as the limitations of our works.
To summarize, in this paper we leverage offline data to conduct exploration using non-reactive policies. The key contributions lie in the originality of the setup and in the mathematical work describing the conditions that need to be met for sub-optimality guarantees. Algorithmically, this is achieved by a novel blending of the principles of optimism and pessimism to design the exploration policy in a way that is provably efficient. Extending these algorithmic and theoretical insights to derive a practical reinforcement learning algorithm with function approximation is an important next step.
[1]. Gu et al. A review of safe reinforcement learning: Methods, theory and applications.
[2]. Ding et al. Provably efficient safe exploration via primal-dual policy optimization.
[3]. Cheng. End-to-end safe reinforcement learning through barrier functions for safety-critical continuous control tasks.
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed response; I have raised my score to 7. I would like to see the proofs described more clearly in the final paper. Wishing you all the best with your publication.
Strengths: - The paper introduces an algorithm that addresses the problem of non-reactive policy design in reinforcement learning and provides provable guarantees for the quality of the resulting policy.
- The concept of sparsified MDP is introduced and effectively used in the algorithm and theoretical analysis.
- The paper rigorously establishes a nearly minimax-optimal upper bound for the sample complexity needed to learn a local ε-optimal policy using the proposed algorithm.
- The paper addresses a practical need for non-reactive exploration in domains where policy switches are costly and provides a solution that can be valuable in such scenarios.
Weaknesses: - The paper only considers discrete state and action spaces.
- The paper provides no empirical evaluations or demonstrations of the proposed algorithm. Neither does it shed light on the design of practical algorithms.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Can you give more details on the non-reactive property of a policy? It seems that most RL policies in an MDP will be non-reactive, as long as they take only the current state $s_t$ as input. How is the example in line 30-32 related to the non-reactive property?
- Online exploration may lead to safety violations. How safe will the online phase be in the original MDP?
- It seems contradictory to be both optimistic and pessimistic at the same time. How to determine the region that the agent knows how to navigate?
- Can you provide some empirical evaluations of the proposed algorithm, compared with fully offline or fully online algorithms?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and suggestions on our paper! We will try to answer your questions below.
1. Q: Can you give more details on the non-reactive property of a policy? It seems that most RL policies in an MDP will be non-reactive, as long as they take only the current state s_t as input. How is the example in line 30-32 related to the non-reactive property?
A: Thank you for your question and we will try to make it clearer in the next version. **A non-reactive policy is a policy that is not updated during the whole interaction process.**
As the reviewer notices, RL algorithms typically employ a sequence of non-reactive policies as they explore the environment. **These algorithms are reactive to the data they acquire, because such data is used to update the currently deployed policy.** The sequence of non-reactive policies that they deploy could be interpreted as a **single reactive** policy, namely one that changes during the interaction.
In contrast, our work examines the setting where exploration must be done with a **single non-reactive** policy: in our case, no updates are possible during the entire exploration phase.
While deploying a sequence of policies is the better approach from a theoretical point of view, **doing so is not always practically feasible**. In lines 30-32 we give an example with a human in the loop. The human may need significant time to validate each policy that a reactive algorithm produces, which can make real-time deployment of a reactive algorithm impractical. This problem disappears if the algorithm deploys only a static policy for exploration, because that policy needs to be checked just once before deployment.
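To make the distinction concrete, here is a minimal sketch of the two interaction protocols; `env`, `update`, and the rollout interface are placeholders for illustration, not part of the paper:

```python
def reactive_exploration(env, policy, update, num_episodes):
    """Reactive protocol: the deployed policy is updated after every episode,
    so each new policy may require fresh validation before deployment."""
    data = []
    for _ in range(num_episodes):
        data.append(env.rollout(policy))
        policy = update(policy, data)  # policy changes during interaction
    return data

def non_reactive_exploration(env, policy, num_episodes):
    """Non-reactive protocol: a single fixed policy is deployed for the whole
    exploration phase, so it needs to be validated only once."""
    data = []
    for _ in range(num_episodes):
        data.append(env.rollout(policy))
    return data
```

The only difference is the in-loop `update` call: removing it is exactly what makes the policy "non-reactive" in the sense discussed above.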
2. Q: Online exploration may lead to safety violations. How safe will the online phase be in the original MDP?
A: Although safety is an important concern in the application of reinforcement learning, it is not the main motivation of our paper. Nonetheless, there are two ways to incorporate safety.
The first is to incorporate techniques from safe RL [4,5,6] into the design of the algorithm, and also into the procedure for collecting the offline dataset. The second is before deployment: non-reactive exploration produces a single policy, so its safety can be checked quite easily before deployment.
3. Q: It seems contradictory to be both optimistic and pessimistic at the same time. How to determine the region that the agent knows how to navigate?
A: One of our contributions is that our algorithm combines the principle of optimism and pessimism in a subtle way: they are applied to different regions of the MDP and so there is no contradiction between these two principles. We will clarify this in the introduction.
At a high level, pessimism excludes the region where we have little to no knowledge about the transition dynamics, and it identifies a sub-MDP where approximate planning is possible. Within the sub-MDP, we explore using the principle of optimism. In summary, pessimism defines the MDP sub-region where we can leverage optimism to conduct exploration.
4. Q: Can you provide some empirical evaluations of the proposed algorithm, compared with fully offline or fully online algorithms?
A: We thank reviewers for the suggestion that the paper could be made stronger with the addition of numerical experiments. Although we agree with the reviewers’ suggestions, a practically useful RL algorithm for this setting would need to leverage function approximation such as neural networks. Designing an effective algorithm for such a setting requires making several critical design choices that are specific to the function approximation setting, and overcoming the challenges that likely arise. This is beyond the scope of the paper, which is a first step towards understanding how offline data can be used for non-reactive exploration. We will highlight in the conclusion of the paper that the creation of a practical algorithm with function approximation is an important future direction.
5. Q: The paper only considers discrete state and action spaces.
A: Indeed, our paper is a first step for exploration with a non-reactive policy computed with the help of offline data.
Although we could have tackled the setting of function approximation, doing so would have involved a number of technical considerations that might have made the result more cluttered and less clear for a first paper on such topics. We do agree with the reviewer that the extensions to the function approximation setting are important future directions, and we will mention this in the conclusion.
[1]. Jin et al. Is Q-learning provably efficient?
[2]. Kaufmann et al. Adaptive reward-free exploration.
[3]. Ménard et al. Fast active learning for pure exploration in reinforcement learning.
[4]. Gu et al. A review of safe reinforcement learning: Methods, theory and applications.
[5]. Ding et al. Provably efficient safe exploration via primal-dual policy optimization.
[6]. Cheng et al. End-to-end safe reinforcement learning through barrier functions for safety-critical continuous control tasks.
---
Rebuttal Comment 1.1:
Title: Response
Comment: I acknowledge the authors' rebuttal and I maintain my rating towards acceptance. | Summary: The paper proposes an algorithm that, given a previously collected dataset of transitions from an MDP, produces a non-reactive policy that can effectively collect additional data that enables a near-optimal policy to be obtained for any possible reward function. The algorithm is model-based and combines elements of optimism (exploration bonuses) and pessimism (early termination at OOD states/actions). Suboptimality is bounded relative to the optimal policy that stays within the “sparsified MDP” which is the subset of the original MDP that is sufficiently covered by the data.
Strengths: * The paper presents positive, novel (to my knowledge) theoretical results in a well-motivated setting of practical interest.
* The interaction protocol is new AFAIK, but closely related to existing areas such as reward-free RL. It may inspire more work in the future.
* The paper is clear and not hard to follow, despite its technicality.
Weaknesses: * No experiments, despite having “Design of Experiments” in the title :) (But this is a theory paper so I think it is okay)
* IMO the Related Work should cite the MOReL paper [1], which uses a pessimistic MDP construction similar to your sparsified MDP, in which low-density states/actions lead to a special absorbing state. Of course, they are tackling a different setting (offline RL with a particular reward function) and use a somewhat different termination criterion, but the idea is the same.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: I’m curious if it is known that better sample complexity can be obtained if you allow the policy to be adaptive? (Although I understand the engineering-related reasons for not doing so.) If so, it could be useful context to briefly comment on in the paper.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: No, limitations are not addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and suggestions on our paper! We will try to answer your questions below.
1. Q: I’m curious if it is known that better sample complexity can be obtained if you allow the policy to be adaptive? (Although I understand the engineering-related reasons for not doing so.) If so, it could be useful context to briefly comment on in the paper.
A: This is a very good question and we will stress this point in the next version of our paper. Yes, one can do better with a reactive exploration policy. Using reactive policies, one can obtain an eps-optimal policy on the entire MDP with O(H^2 S^2 A / \eps^2) trajectories [1,2], while non-reactive exploration policies can only yield an eps-optimal policy on the sparsified MDP. This difference is natural, since we can only effectively explore the region about which we have some knowledge (i.e., the sparsified MDP).
2. Q: No experiments, despite having “Design of Experiments” in the title :) (But this is a theory paper so I think it is okay)
A: We thank reviewers for the suggestion that the paper could be made stronger with the addition of numerical experiments. Although we agree with the reviewers’ suggestions, a practically useful RL algorithm for this setting would need to leverage function approximation such as neural networks. Designing an effective algorithm for such a setting requires making several critical design choices that are specific to the function approximation setting, and overcoming the challenges that likely arise. This is beyond the scope of the paper, which is a first step towards understanding how offline data can be used for non-reactive exploration. We will highlight in the conclusion of the paper that the creation of a practical algorithm with function approximation is an important future direction.
3. IMO the Related Work should cite the MOReL paper, which uses a pessimistic MDP construction similar to your sparsified MDP, in which low-density states/actions lead to a special absorbing state. Of course, they are tackling a different setting (offline RL with a particular reward function) and use a somewhat different termination criterion, but the idea is the same.
A: We thank the reviewer for pointing out a connection with MOReL; the paper is indeed relevant and we will cite it. As the reviewer notices, MOReL also constructs a sub-MDP using the principle of pessimism; a key algorithmic difference is that we combine such pessimistic construction with the principle of optimism to explore within the sub-MDP instead of directly using the dataset to output a policy.
[1]. Ménard et al. Fast active learning for pure exploration in reinforcement learning.
[2]. Jin et al. Reward-free exploration for reinforcement learning.
---
Rebuttal Comment 1.1:
Comment: Thank you for your answers! I would say that my concerns are addressed and my score stands as-is. While experiments would of course strengthen the paper further (as the authors agree), I think the theoretical results are already a useful contribution. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The work proposes a method to create a non-reactive exploratory policy from an initial input dataset. Then, leveraging the new data, the algorithm generates a locally near-optimal policy.
The algorithm is relevant to low-switching settings, where it is assumed that there is a cost to changing a deployed policy.
The proposed algorithm uses the initial data to build a sparsified MDP, an approximation of the original MDP that keeps only the transitions (s, a, s') for which there are at least Φ transitions.
The next part of the algorithm builds the exploratory policy on the sparsified MDP, using an iterative value-iteration strategy with an exploration bonus instead of actual rewards. The exploration bonus balances optimism and pessimism, steering exploration toward less-explored states while avoiding spending too much time in unknown parts of the MDP.
Finally, additional data is collected from the environment using this exploring policy, and a value iteration algorithm is employed on the combined datasets of experiences to build the final policy.
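As a concrete illustration, the pipeline described above can be sketched in tabular form. The function names, the absorbing-state construction, and the `1/sqrt(count)` bonus formula are illustrative assumptions for exposition, not the paper's exact construction:

```python
import numpy as np

def sparsify(counts, phi):
    """Build the empirical transition kernel of a 'sparsified' MDP from
    visitation counts[s, a, s']: transitions observed fewer than phi times
    have their probability mass redirected to an extra absorbing state."""
    S, A, _ = counts.shape
    P = np.zeros((S + 1, A, S + 1))      # index S is the absorbing state
    P[S, :, S] = 1.0                     # the absorbing state loops to itself
    for s in range(S):
        for a in range(A):
            n_sa = max(counts[s, a].sum(), 1)
            for s2 in range(S):
                if counts[s, a, s2] >= phi:
                    P[s, a, s2] = counts[s, a, s2] / n_sa
            P[s, a, S] = 1.0 - P[s, a, :S].sum()   # leftover mass is absorbed
    return P

def exploratory_policy(P, counts, horizon, bonus_scale=1.0):
    """Finite-horizon value iteration on the sparsified MDP, with an
    exploration bonus ~ 1/sqrt(visit count) playing the role of the reward."""
    S1, A = P.shape[0], P.shape[1]
    n_sa = np.ones((S1, A))
    n_sa[:-1] = np.maximum(counts.sum(axis=2), 1)
    bonus = bonus_scale / np.sqrt(n_sa)
    bonus[-1] = 0.0                      # no bonus for the absorbing state
    V = np.zeros(S1)
    policy = np.zeros((horizon, S1), dtype=int)
    for h in reversed(range(horizon)):
        Q = bonus + P @ V                # expected next-step value per (s, a)
        policy[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy
```

Under-covered actions lead straight to the absorbing state (pessimism), while the visit-count bonus drives exploration within what remains (optimism), mirroring the balance described above.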
The authors provide optimality bounds, describing the conditions for the algorithm to discover an epsilon-suboptimal policy.
Strengths: The strengths of this paper lie in the originality of the setup and in the mathematical work to describe the conditions that need to be met for sub-optimality guarantees.
Weaknesses: The weakness of the paper is the need for empirical evaluation. For example, it would be helpful to see how well the algorithm performs given initial datasets of different sizes and coverage.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Could some experiments be added to see the performance of the algorithm in practice, maybe compared to other offline RL methods?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: no
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and suggestions on the paper! We will try to answer your questions below.
1. Q: The weakness of the paper is the need for empirical evaluation. For example, it would be helpful to see how well the algorithm performs given initial datasets of different sizes and coverage.
A: We thank reviewers for the suggestion that the paper could be made stronger with the addition of numerical experiments. Although we agree with the reviewers’ suggestions, a practically useful RL algorithm for this setting would need to leverage function approximation such as neural networks. Designing an effective algorithm for such a setting requires making several critical design choices that are specific to the function approximation setting, and overcoming the challenges that likely arise. This is beyond the scope of the paper, which is a first step towards understanding how offline data can be used for non-reactive exploration. We will highlight in the conclusion of the paper that the creation of a practical algorithm with function approximation is an important future direction. | Summary: The paper considers the setting where it is possible to leverage a dataset of transitions, together with the possibility of deploying a policy to collect additional information. The question then lies into what kind of policy should be deployed and what kind of data should be gathered. The authors argue that deploying an exploratory policy that switches, e.g. a policy that learns and adapts from its experience, leads a great engineering costs. As such, they propose to follow the principle of pessimism together with a non reactive policy that would be constrained to a sub-region of the MDP with enough transitions. They also make use of the principle of optimism to derive the exploratory actions taken within the subMDP. The authors provide a near optimal minimax-optimal upper bound for learning an epsilon-optimal policy.
Strengths: The contributions and assumptions are made clear in the introduction (however, the concept of a reactive policy is only clearly explained in plain words somewhat late). An intuition section is also provided, which helps the reader follow the paper.
The authors provide theoretical guarantees both for the sparsified MDP together with the full MDP. In particular for the results on the full MDP it seems like a reduction in the epsilon coefficient is an important contribution.
The paper emphasizes a hybrid approach, exploring a setting that combines elements of both offline and online methods, hence offering a more practical and adaptable framework for various real-world applications.
Weaknesses: Although the setting of interest is important (mix of online and offline) the paper doesn’t stress enough the importance of a non-reactive policy. Why is learning from the generated experience such a bad idea in practice? It seems like a slowly changing deployed policy (where changes perhaps happen through a trust region) would be a better choice.
I understand the paper is essentially a theoretical one, however it would be interesting to present some empirical evidence as to the practicability of the proposed algorithm. Indeed, it is not clear if the current dependencies on state, action, epsilon and such quantities in the bounds would provide a meaningful difference.
The paper misses an opportunity to provide a comprehensive conclusion, one that effectively synthesizes the results, giving the reader a clear understanding of the study's overall implications and potential future directions.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: How good of a policy (trained from scratch) could be obtained with the number of additional samples required in Corollary 5.2?
Although the dependence on the desired accuracy is reduced in Corollary 5.2, how much of a difference can this make given the concentrability coefficient?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations are not clearly stated throughout the work. Some of the questions above try to probe into this.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and suggestions on our paper! We will try to answer your questions below.
1. Q: Why is learning from the generated experience such a bad idea in practice? It seems like a slowly changing deployed policy (where changes perhaps happen through a trust region) would be a better choice.
A: We agree with the reviewer that a slowly changing policy would be the preferred choice (and the approach that we would recommend) if it can be implemented easily.
However, an interactive system that adaptively changes interventions in response to outcomes may be impractical in some settings due to the expertise and infrastructure needed.
An example is (1) when multiple agents collect data asynchronously and real-time communication to update their policy is difficult (such as in a large organization that is selling ads) and (2) whenever there is a significant overhead in the engineering infrastructure to implement an adaptive policy (such as in an organization with a complex production code).
Running an experiment with a fixed decision policy is much simpler logistically (since such organizations may already design decision policies by hand) and appealing, since many areas (education, healthcare, social sciences) commonly deploy experiments to find the best approach.
In these settings there is a key opportunity to design a sample efficient non-reactive policy that can be used to gather additional data to later identify better decision rules.
Although some of these considerations are mentioned in Lines 30-40, we will expand them with additional discussion.
2. Q: It would be interesting to present some empirical evidence as to the practicability of the proposed algorithm. Indeed, it is not clear if the current dependencies on state, action, epsilon and such quantities in the bounds would provide a meaningful difference.
A: We thank reviewers for the suggestion that the paper could be made stronger with the addition of numerical experiments.
Although we agree with the reviewers’ suggestions, a practically useful RL algorithm for this setting would need to leverage function approximation such as neural networks. Designing an effective algorithm for such a setting requires making several critical design choices that are specific to the function approximation setting, and overcoming the challenges that likely arise. This is beyond the scope of the paper, which is a first step towards understanding how offline data can be used for non-reactive exploration. We will highlight in the conclusion of the paper that the creation of a practical algorithm with function approximation is an important future direction.
**We improved the sample complexity compared to a purely offline approach.** The best tabular offline algorithm requires O(H^3 S^2 C^* / \eps^2 + H^{5.5} S^2 C^* / \eps) samples to find an eps-optimal policy [4] (when ‘translated’ to the time-homogeneous and reward-free setting). Our sample complexity is O(H^3 S^2 A / \eps^2 + H^4 S^2 A C^* / \eps). We shave off the concentrability coefficient C^* in the leading term and also shave off H^{1.5} in the O(1/\eps) term. Since the concentrability coefficient can be extremely large, our approach offers a significant improvement over purely offline algorithms.
More precisely, the current dependencies on S, A, H and \eps are minimax optimal up to log factors in the reward-free setting [3]. This means that in the worst case one cannot obtain an eps-optimal policy using fewer samples than our bounds.
3. Q: How good of a policy (trained from scratch) could be obtained with the number of additional samples required in Corollary 5.2?
A: Generally it is not possible to obtain meaningful guarantees when training *from scratch* with a single non-reactive policy. In fact, in the absence of any information about the MDP the best one can do is just to deploy a uniformly random policy, which commands an exponential O(exp(H)) sample complexity in hard-to-explore settings [1]. When exploring from scratch it is essential to use *reactive* policies; a well designed algorithm can find \eps-optimal policies with O(H^3 S^2 A / \eps^2) samples in the reward-free setting [3].
4. Q: Although the dependence on the desired accuracy is reduced in Corollary 5.2, how much of a difference can this make given the concentrability coefficient?
A: In practice, this concentrability coefficient is unknown and can be large. Since we remove C^* from the main term (the one with O(1/\eps^2) dependence), the sample complexity now depends on the concentrability C^* only through a lower-order term (the one with O(1/\eps) dependence). That is, the smaller the target accuracy \eps, the smaller the effect of a large concentrability C^*.
5. Q: The paper misses an opportunity to provide a comprehensive conclusion, and the limitations are not clearly stated throughout the work.
A: Thanks for your suggestion and we will add an additional conclusion section in the future version.
To summarize, in this paper we leverage offline data to conduct exploration using non-reactive policies. The key contributions lie in the originality of the setup and in the mathematical work describing the conditions that need to be met for sub-optimality guarantees. Algorithmically, this is achieved by a novel blending of the principles of optimism and pessimism to design the exploration policy in a way that is provably efficient. Extending these algorithmic and theoretical insights to derive a practical reinforcement learning algorithm with function approximation is an important next step.
[1]. Xiao et al, The curse of passive data collection in batch reinforcement learning.
[2]. Qiao et al. Sample-efficient reinforcement learning with loglog (t) switching cost.
[3]. Jin et al. Reward-free exploration for reinforcement learning.
[4]. Xie et al. Policy finetuning: Bridging sample-efficient offline and online reinforcement learning.
---
Rebuttal Comment 1.1:
Comment: Dear authors,
thank you for the very detailed rebuttal, it certainly helps in understanding the paper and its results.
It is true that a practically useful deep RL algorithm would require neural networks; however, this does not necessarily mean that neural networks are required for an illustrative experiment to be added to the paper. This would complement well the application example provided by the authors, where complex production code could inhibit a reactive policy: one would imagine that in such an example the non-reactive policy would be a simpler and more predictable model (e.g., linear function approximation). Would it be feasible to find a minimal example where the proposed strategy makes a difference in practice (even in the tabular setting)? If so, it would be important to at least state it and encourage more research on the matter.
I appreciate the authors clearly stating the connections to previous works and bounds, it is a more convenient way to assess the contributions of the paper. It is also good to know that the authors will rework some of the presentation to include these together with a conclusion. For these reasons I am raising my score. | null | null | null | null |
Bootstrapped Training of Score-Conditioned Generator for Offline Design of Biological Sequences | Accept (poster) | Summary: This paper introduces a novel algorithm, the Bootstrapped Training of Score-Conditioned Generator (BOOTGEN), to optimize the design of biological sequences. BOOTGEN overcomes the challenge of high-cost evaluations and vast search space by training a score-based generator using rank-based weights and a bootstrapping process. This generator is then augmented with self-generated data labeled by a proxy score function. The process results in diverse and accurate biological sequence designs. The efficacy of BOOTGEN is demonstrated via extensive experiments across six offline biological design tasks.
Strengths: The paper is clear and well-presented. The authors provide a comprehensive introduction and explanation of the BOOTGEN algorithm, making it easily understandable even for those who may not be specialists in this specific field. The experimental results are also clearly laid out and explained, adding to the overall lucidity of the paper.
Furthermore, the extensive experiments and superior results over existing methods reinforce the potential value of this work to both academic researchers and industry practitioners.
Weaknesses: 1. The authors have overlooked a crucial piece of literature, "Deep Extrapolation for Attribute-Enhanced Generation" (https://arxiv.org/abs/2107.02968), presented at NeurIPS 2021. This paper also explores protein sequence design and employs a generator/scorer framework. The authors should acknowledge this work and conduct a comparative analysis with it.
2. The first word of the title "Automatic design of biological sequences" should be capitalized. This correction should also be made in Section 4.2, where the title reads "Varying the evaluation budget."
3. Claiming the ranking-based weighting as a significant contribution over the value-based weighting is not entirely novel and may not warrant its inclusion as a key contribution.
4. The paper's differentiation from "Conditioning by adaptive sampling for robust design," which also uses a generative model and a proxy for labeling, is unclear.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See Weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for providing a valuable review.
**W1: About the paper "Deep Extrapolation for Attribute-Enhanced Generation"**
Thank you for pointing out this relevant literature. GENhance [1] and our method share the common goal of extrapolating from offline datasets using generative models. BootGen is a high-level algorithm that leverages existing (conditional) generative models, whereas GENhance is a novel generative model with its own neural architecture and learning loss. Therefore, the two works are orthogonal. The combination of GENhance and BootGen appears complementary and presents a promising direction for future work. We will add a dedicated discussion to further analyze the potential benefits and implications of this integration.
---
**W2: About typo**
Thank you for pointing out the typo; we will revise it.
---
**W3: Contribution claims of rank-based weighting**
Thank you for your comment. While applying rank-based weighting to a conditional generator does differ from prior value-based weighting, we agree that this aspect alone may not be an essential contribution. Our principal contributions lie in the novel combination of techniques and the high-level algorithmic structure, which facilitates stable extrapolation for offline biological sequence optimization. We highly value your feedback and will emphasize these core contributions in our paper.
---
**W4: Unclear differentiation from "Conditioning by adaptive sampling for robust design"**
Our algorithm addresses the challenge of offline black-box optimization through a combination of reweighting processes and extrapolation of queries for score-conditioned models. The objective is to generate novel samples beyond the scope of the offline dataset without relying on an oracle function. To achieve this, we employ an aggregation method to ensure the stability of the process.
The CbAS [2] method is a representative black-box optimization technique that excels in generating candidate samples to maximize the oracle function in an online setting. However, when applied to offline scenarios, it falls short in performance. Unlike CbAS, which relies on continuous updates from online Oracle queries, our approach leverages reweighting and extrapolation processes to excel in offline settings. Tables 1 and 2 in the main text demonstrate our superior performance across all tasks compared to CbAS.
---
### References
[1] Chan, Alvin, et al. "Deep extrapolation for attribute-enhanced generation." Advances in Neural Information Processing Systems 34 (2021): 14084-14096.
[2] Brookes, David, Hahnbeom Park, and Jennifer Listgarten. "Conditioning by adaptive sampling for robust design." International conference on machine learning. PMLR, 2019.
---
Rebuttal Comment 1.1:
Title: Feedback
Comment: I still think that CbAS is similar to your work. CbAS can also be applied in this offline scenario. It also leverages a proxy to learn a better generator to extrapolate.
---
Reply to Comment 1.1.1:
Comment: Thank you for fostering an insightful discussion.
As you aptly pointed out, there is a common aim between CbAS and BootGen. Both methods share core principles, including proxy utilization, online training through proxy and generator interplay, and the incorporation of weighted Maximum Likelihood Estimation (MLE) training.
To elucidate the distinctions between CbAS and BootGen:
| | CbAS | BootGen |
| -------- | -------- | -------- |
| Underlying Distribution | $q(x)$ | $p(x \mid y)$ |
| Generative Framework | Weighted VAE | Conditional Auto-regressive Model |
| Ensemble | Proxy Model (ensemble for y) | Generative Model (ensemble for x) |
| Weighting Approach | Conditional probability to enforce desired value set S given x | Assessment through dataset ranking |
CbAS approximates the conditional distribution $p(x|S)$ by introducing a variational distribution $q(x;\phi)$, which is trained to minimize $D_{KL}(p(x|S) \| q(x;\phi))$; up to terms independent of $\phi$, this amounts to maximizing $E_{p(x)}[P(S|x)\log q(x;\phi)]$, where $S$ stands for a set of desired property values. This can be seen as weighted MLE, where $P(S|x)$ is the weight; i.e., a high probability of the desired property value yields a high weight. The weight $P(S|x)$ is estimated using the oracle score function (or, as you mentioned, a proxy function in the offline setting). After training $q(x;\phi)$, designs with the desired values are sampled from the variational distribution: $x \sim q(x;\phi)$.
In contrast, BootGen directly trains the **conditional** distribution $p(x|y;\phi)$ using weighted MLE (with rank-based weights) and periodically augments the offline dataset using the generator being trained together with the proxy model. Designs are sampled from the conditional distribution by querying the desired value at inference time: $x \sim p(x|y=y^*;\phi)$. The ensemble step aggregates the designs generated by parallel BootGen processes.
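For illustration, the two weighting schemes can be sketched as follows; the functional forms (e.g., the rank weight with parameter `k`, and the hard-threshold proxy for $P(S|x)$) are assumptions for exposition, not the exact formulas of either paper:

```python
import numpy as np

def rank_based_weights(scores, k=0.01):
    """BootGen-style weighting: the weight of each sample depends only on the
    rank of its score within the dataset, not on the score's magnitude."""
    n = len(scores)
    ranks = np.empty(n, dtype=int)
    ranks[np.argsort(-np.asarray(scores))] = np.arange(n)  # rank 0 = best score
    w = 1.0 / (k * n + ranks)
    return w / w.sum()

def cbas_style_weights(scores, threshold):
    """CbAS-style weighting: the weight is an estimate of P(S | x), here a hard
    0/1 proxy for whether the score reaches the desired set S."""
    w = (np.asarray(scores, dtype=float) >= threshold).astype(float)
    return w / max(w.sum(), 1.0)

def weighted_nll(log_probs, weights):
    """Weighted MLE objective: minimize -sum_i w_i * log p(x_i)."""
    return -float(np.dot(weights, log_probs))
```

Rank-based weights are invariant to monotone rescalings of the score, which is one reason they can be more robust than magnitude-based weights when scores are noisy.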
I would like to clarify that we have examined both the paper [1] and the source code of CbAS for offline model-based optimization, accessible at this link: https://github.com/brandontrabucco/design-baselines/tree/master/design_baselines/cbas.
---
The performance disparity between BootGen and CbAS is elucidated in the results showcased in Table 1 of the main text.
| Method | RNA-A | RNA-B | RNA-C | TFBind8 | GFP | UTR | Avg. |
|------------|----------------|----------------|----------------|-----------------|-----------------|-----------------|--------|
| CbAS | 0.541 ± 0.042 | 0.647 ± 0.057 | 0.644 ± 0.071 | 0.913 ± 0.025 | **0.865 ± 0.004** | 0.692 ± 0.008 | 0.717 |
| BootGen | **0.902 ± 0.039** | **0.931 ± 0.055** | **0.831 ± 0.044** | **0.979 ± 0.001** | **0.865 ± 0.000** | **0.865 ± 0.000** | **0.895**|
Note the table above presents the experimental results on 100th percentile scores. The mean and standard deviation are reported over 8 independent solution generations. The best-scoring value is marked in bold.
[1] Brookes, David, Hahnbeom Park, and Jennifer Listgarten. "Conditioning by adaptive sampling for robust design." International conference on machine learning. PMLR, 2019. | Summary: This paper proposes to solve the problem of generating novel objects (in this case biological sequences) by learning weighted MLE models, and augmenting the training set of those MLE models with virtual data whose score is based on extrapolations from a proxy.
This method is tested on standard biological sequence generation problems and found to yield good scores and diversity.
Strengths: I think the main strength of this paper is that it confirms that extrapolation through the MBO-style proxy+generative-model approach, which has recently gained popularity, can still be improved and explored in a variety of ways.
The paper is well written and it was easy for me to understand the method.
Weaknesses: I think the main weakness of this work is that it feels like an aggregation of methods that work well together, rather than a strong single contribution that deeply improves our understanding of MBO.
This is seen fairly well in e.g. Tables 4.3 and 4.4, there's an accumulation of things that improve performance; while it is commendable to improve the performance of ML methods, it's not clear exactly what we've learned from this paper. If there's one way I think the authors could improve this paper it's by either showing or arguing that there is a central contribution, a nugget of knowledge gained by doing these experiments.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I don't have too many questions, but I have a minor concern about the fairness of comparisons; I wonder if all baselines see the same amount of data and receive roughly equivalent amounts of compute. In particular, the BootGen experiments seem to leverage an ensemble of generators ("we gather cross-aggregated samples from multiple score-conditioned generators"). Many of the baselines could also be trivially improved via ensembling; is this comparison done?
Small comment: I'd suggest using shapes (in addition to colors) in Fig 4.2
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes, the authors address limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for providing a constructive review.
**What can we learn from this paper?**
Focusing on fundamental and rigorous approaches is more important in offline design optimization tasks than relying on fancy techniques.
Offline design optimization is inherently challenging as it prohibits access to the Oracle score function during training. As a result, achieving statistically stable and powerful extrapolation over the offline dataset becomes crucial. The main contribution of this study is the reinterpretation/transformation of traditional statistical concepts, such as ensemble and bootstrapping, into deep learning forms for utilization in offline design optimization. The ensemble technique can be employed as an exploration strategy to explore various modes effectively. Bootstrapping demonstrated its effectiveness as an exploitation strategy to reach high-score regions with limited samples. Particularly, through various experiments, it has been proven that the combination of these two simple yet robust concepts can serve as a powerful offline design optimization technique.
1. **Rank-based weighting**: Make the generative model focus on the high-score regions of the offline dataset.
2. **Score-conditioned generator**: Train a conditional generative model that maps a score to a sequence; by learning the score-to-sequence relationship from the low-scored offline data, it can extrapolate to high-score regions.
3. **Bootstrapping using 1 and 2**: Augment the training dataset using 1, 2, and the proxy score function, making the generative model more confident by distilling knowledge from the proxy and the generator itself.
4. **Diverse aggregation**: Stabilize the possible risk from 3 by leveraging multiple diversified generators (trained under possibly different bootstrapping scenarios) and diversified sampling.
Our algorithm integrates these components seamlessly and can be viewed as a deep learning-based renovation of the classical ensemble approach of bootstrapping and aggregation.
In conclusion, our paper demonstrates the importance of carefully approaching and revisiting simple classical schemes within the context of deep learning tasks. This approach proves to be more powerful than relying on fancy techniques, as evidenced by our experiments' clear and compelling results.
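The four components above can be sketched as a toy loop. Everything here (the string "sequence" domain, the proxy stand-in, the candidate counts) is an invented illustration under stated assumptions, not the authors' implementation.

```python
import random

def proxy(seq):
    # stand-in proxy score: fraction of a target token in the sequence
    return seq.count("A") / len(seq)

def sample_from_generator(rng, length=8):
    # stand-in for sampling x ~ p(x | y = y*) from a score-conditioned generator
    return "".join(rng.choice("ACGT") for _ in range(length))

def bootgen_round(dataset, rng, n_candidates=64, n_keep=8):
    # 1) sample candidates from the (conceptual) score-conditioned generator
    candidates = [sample_from_generator(rng) for _ in range(n_candidates)]
    # 2) label them with the proxy and keep only the top-scoring ones
    top = sorted(candidates, key=proxy, reverse=True)[:n_keep]
    # 3) bootstrap: augment the training dataset with proxy-labeled samples
    return dataset + [(seq, proxy(seq)) for seq in top]

rng = random.Random(0)
data = [("ACGTACGT", proxy("ACGTACGT"))]
for _ in range(3):  # a few bootstrapping rounds
    data = bootgen_round(data, rng)
# diverse aggregation would then take the union of top samples from several
# such independently trained generators
```

In the actual method the augmented dataset would be used to retrain the generator with rank-based weights between rounds; the sketch only shows the sample-filter-augment cycle.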
---
**Answers to the questions**
Our experimental setting ensures fairness in two key aspects. Firstly, all baselines use the same offline dataset, and none of them have access to the Oracle score function, ensuring a level playing field for comparison. Secondly, the computation time is similar for every task, with almost all tasks completed within an hour (our method is notably fast).
Compared with the most recent offline model-based optimization work, BDI [1], BootGen demonstrates faster speed, as shown in the table below.
| | Training time|
| -------- | -------- |
| BDI | 1h |
| Ours (1 gen) | 3min |
| Ours (8 gen) | 24min |
BootGen's speed is measured per generator, and we utilize 8 generators in parallel. If measured as serial computation (worst case), the total time would be approximately $3 \times 8 = 24$ minutes.
It's important to note that computation time is not the main focus in offline black-box optimization tasks since the black-box score function becomes the bottleneck for computation. The primary concern lies in efficiently handling the objective evaluation process, which often requires significant time and resources.
---
### Response to the review of "Many of the baselines could also be trivially improved via ensembling."
Our method introduces novelty through the ensemble process, which involves creating ensembles from multiple bootstrapping generators. While ensemble techniques are widely used in this field, our approach demonstrates that the direct application of ensembles does not trivially improve performance, as observed in other baselines [2].
| | UTR (100th percentile)|
| -------- | -------- |
| Grad. [2] | 0.695 $\pm$ 0.013 |
| Grad. (Mean ensemble) [2] | 0.696 $\pm$ 0.009 |
| Grad. (Min ensemble) [2] | 0.693 $\pm$ 0.010 |
| BootGen (w/o DA ensemble) | 0.729 $\pm$ 0.074 |
| BootGen (w DA ensemble) | 0.858 $\pm$ 0.003 |
Please note that "Grad." refers to the gradient ascent method mentioned in [2]. The "Mean ensemble" represents an ensemble of proxy models that select the score value based on the mean value among proxies, while the "Min ensemble" selects the score value from the minimum value among proxies. The experimental results of the "Grad." method can be found in Table 5 of [2].
---
Thank you for pointing out the typo; we have now corrected it in the main paper.
---
### References
[1] Chen, Can, et al. "Bidirectional learning for offline infinite-width model-based optimization." Advances in Neural Information Processing Systems 35 (2022): 29454-29467.
[2] Trabucco, Brandon, et al. "Design-bench: Benchmarks for data-driven offline model-based optimization." International Conference on Machine Learning. PMLR, 2022.
---
Rebuttal Comment 1.1:
Comment: Thanks for the precisions and extra data points. I think your contribution is good, perhaps ~worrying~ writing could be improved somewhat to really distill these points (but I realise this is easier said than done).
I will raise my score from 6 to 7.
---
Reply to Comment 1.1.1:
Comment: Thank you for raising the score, providing invaluable feedback, and supporting the acceptance of our paper.
Strengths: - The principle behind BootGen is simple: use bootstrapping to distill signal from multiple models into one. The consensus between all the models will reduce variance and improve performance. Seeing that BootGen can outperform arguably more complicated methods is good to see.
- The combination of steps -- ranking, bootstrapping, filtering, and diversity aggregation -- are logical and synergistic. It's not a surprise they help each other.
Weaknesses: - **The novelty is overstated**. Bootstrapping is a standard statistics idea, score-wise rank weights was proposed in prior work. The idea of filtering and diversity aggregation are not novel either. Adalead for instance will use the proxy model to select its final sequences. Aggregation is simply taking the union over all the samples. I don't believe there is technical novelty other than bringing together existing ideas in a straightforward way. A bioinformatics journal seems more appropriate for this work.
- There seems to be a large jump in performance by using the proxy model to filter the sequences. In fact, the performance of the method seems extremely dependent on the accuracy of the proxy model. It is not stated that the same proxy model was used across all methods in the experiments. **This leads to an unfair comparison**. I believe a fair comparison would require using the same proxy model to isolate the methodological contributions put forth in BootGen.
- The amount of bootstrapping seems very high. The appendix states 1280 samples are needed from each generator. The number of generators is not specified clearly. The exact number of model evaluations is unclear and this would have been important to analyze. Otherwise it is difficult to tell if the method is working well due to lots of compute over other methods.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - I am confused by Algorithm 2 line 2. It says N generators are trained but N is the also the number of training examples. Is this saying the number of generators trained is the same as the number of examples?
- Can the authors put the main hyperparameters in the main text? It is important to know what N, I, M are to be able to put in context how expensive the method is next to the results.
- Appendix A.3 was hard to read. "Bootstrapping is applied with 2, 500 in additional steps" what does this mean? It is important to put variables to these numbers and explain where in the method they are used.
- As stated above, was the same proxy model used across all proxy-based baselines? i.e. GFN?
- How diverse are the generators? I would suspect they all generate similar sequences, especially given the small dataset. Was this not a problem?
- Can the authors put some representative sequences in the appendix? It is surprising to me the method can get such high fitness in UTR with high novelty, i.e. almost half the sequence changing. Are you sure it's not adversarial examples that are being generated?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors mentioned the limitation of a poor proxy that could "backfire" and cause BootGen to perform worse. However, this is a general limitation for all proxy based methods. I believe there is a bigger limitation with regards to the run-time of the method with needing lots of samples and lots of training.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the constructive review and feedback. We provide responses for addressing the concerns below.
### Response for Novelty
We made a novel combination of well-known techniques. Our novel high-level algorithm involves (a) propelling the generator to discover novel data points beyond the offline dataset by distilling high-likelihood high-score regions with self-confidence and (b) aggregating samples from multiple self-confident generators to minimize potential risks from adversarial samples. This algorithmic strategy consistently outperforms every baseline across all six benchmarks, highlighting the diversity and novelty of the generated samples.
It would be greatly appreciated if you could note that numerous machine learning studies continue to explore innovative amalgamations of established techniques, as exemplified by the seminal paper "Rainbow: Combining Improvements in Deep Reinforcement Learning" [4]. In a similar vein, prior works like GFN-AL [1] ("Biological Sequence Design with GFlowNets") integrate the GFN [5] framework with the concept of active learning (AL) to address biological sequence design challenges. It's important to highlight that such work is published at venues like ICML rather than dedicated bioinformatics journals for several reasons. Firstly, the intricacies of biological sequence design pose a significant challenge for the machine learning community. Secondly, leveraging existing ML techniques for these intricate applications offers a platform for validation and testing. Lastly, the multitude of insights and experimental outcomes from these endeavors can serve as inspiration for the wider ML community.
---
### Unfair Comparison for Proxy Usage
Our proxy model follows the same structure as a recent prior work [1], a simple MLP regressor trained on the offline training dataset.
---
### Training time for the BootGen
BootGen exhibits impressive training efficiency. Our comprehensive process capitalizes on parallel execution through eight generators, enabling the concurrent generation of 1280 samples in under a second. **Consequently, the impact of bootstrapping on training time remains minimal**. A comparative analysis against the latest offline model-based optimization work, BDI, highlights BootGen's superior speed, as demonstrated in the table below.
| | Training time|
| -------- | -------- |
| BDI [2] | 1h |
| Ours (1 gen) | 3min |
| Ours (8 gen serially) | 24min |
This experiment is done with a single Nvidia A100 GPU. BootGen's training speed is measured per generator. When utilizing eight generators in series, the total training time amounts to approximately 24 minutes. However, this training time can be significantly reduced by adopting parallel generator training.
It's worth noting that training time is not the primary focus of offline design optimization methods. In real-world scenarios, the objective evaluation, such as testing protein expressivity or binding activity, often becomes the bottleneck, taking days or even months.
---
### Answer for each question
**A1.** Thank you for the clarification. We will avoid using the letter "N" to prevent confusion and explicitly state that we leverage 8 generators.
**A2.** We appreciate the suggestion. We will include the hyperparameters as a table in the main text for better accessibility; see the attached PDF.
**A3.** We acknowledge the oversight and apologize for any confusion. The bootstrapping process involves two samples with 500 iterations each, resulting in a total of 1000 bootstrapped samples. We will revise the explanation and include it in the main text.
**A4.** Understood, and thank you for clarifying. We use the same proxy model as GFN-AL [1] to ensure a fair comparison. While other baselines may have slightly different learning methods for their proxy models, the core architecture remains consistent with a 2-layer MLP with a 2048 width.
**A5.** You are correct. Each generator focuses on specific regions, which may lead to limited diversity compared to a highly diversified generator like GFN. However, the parallel bootstrapping process ensures that each generator concentrates on different regions, contributing to enhanced diversity. When aggregating samples from multiple generators, we can achieve better diversity than GFN-AL [1].
**A6.** Thank you for the suggestion; we will include representative sequences in the appendix. Regarding adversarial examples in the context of offline design optimization, the definition involves two conditions: (1) the proxy model gives a high score, and (2) the oracle score function gives a low score. In the case of the offline model-based optimization (MBO) benchmark [3], where we assume a given oracle score function (a ResNet pretrained on the 280,000-sample UTR dataset), our situation does not meet the criteria for adversarial examples: both the proxy function and the oracle function give high scores.
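The two-condition definition above can be written down directly. The thresholds below are illustrative assumptions; the point is only that a design counts as adversarial when proxy and oracle disagree in this specific direction.

```python
def is_adversarial(proxy_score, oracle_score, high=0.8, low=0.2):
    # adversarial iff the proxy scores the design high while the oracle
    # scores it low (thresholds are illustrative)
    return proxy_score >= high and oracle_score <= low

# the UTR designs discussed above score ~0.86 under BOTH the proxy and the
# oracle, so they do not meet this definition
print(is_adversarial(0.86, 0.86))  # -> False
print(is_adversarial(0.90, 0.10))  # -> True
```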
---
### References
[1] Jain, Moksh, et al. "Biological sequence design with gflownets." International Conference on Machine Learning. PMLR, 2022.
[2] Chen, Can, et al. "Bidirectional learning for offline infinite-width model-based optimization." Advances in Neural Information Processing Systems 35 (2022): 29454-29467.
[3] Trabucco, Brandon, et al. "Design-bench: Benchmarks for data-driven offline model-based optimization." International Conference on Machine Learning. PMLR, 2022.
[4] Hessel, Matteo, et al. "Rainbow: Combining improvements in deep reinforcement learning." Proceedings of the AAAI conference on artificial intelligence. Vol. 32. No. 1. 2018.
[5] Bengio, Emmanuel, et al. "Flow network based generative models for non-iterative diverse candidate generation." Advances in Neural Information Processing Systems 34 (2021): 27381-27394.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for the reply. I have read the rebuttals.
In principle, I enjoy it when simpler ideas outperform more complicated ideas and welcome it. However, the paper claims BootGen is a "novel algorithm" (line 41), then says "novel variation" (line 43) and "novel bootstrapping strategy" (line 104). I don't agree it is a "novel algorithm" but agree it is a novel combination of ideas that outperforms more complicated methods.
The novelty would be more understandable if the related works was written more carefully. In particular, section 2.1 on existing protein optimization methods ends by stating, "These methods make different assumptions on the cost of evaluating the ground truth function, e.g., online optimization, sample-efficient optimization, and offline optimization." Which is not appropriate given the similarity to CbAS as reviewer M7pC stated and the similarity to AdaLead as I have stated.
Furthermore, upon another read it is odd you only report diversity and novelty for UTR. Can the authors comment?
---
Reply to Comment 1.1.1:
Comment: Thank you for giving feedback on our paper.
We agree with your comment that our algorithm is a novel combination; we will relax our statement regarding the novelty explanation in the main text.
We will update related works more carefully, particularly comparing with CbAS, MIN, GFN-AL, AdaLead, etc., based on the discussion with you and reviewer M7pC.
Regarding the diversity and novelty analyses across different tasks, we have thoughtfully included the results in Appendix C. We have conducted a thorough comparison with GFN-AL [1], which is a prominent baseline that places emphasis on diversity and novelty. This meticulous evaluation helps us provide comprehensive insights into the comparative strengths of our approach.
[1] Jain, Moksh, et al. "Biological sequence design with gflownets." International Conference on Machine Learning. PMLR, 2022. | Summary: The authors propose a novel algorithm: bootstrapped training of score conditioned generators (BOOTGEN), for the offline design of biological sequences. The key idea is to enhance the score-conditioned generator by suggesting a novel variation of the classical ensemble strategy of bootstrapping and aggregating. The method requires training multiple generators using bootstrapped datasets from training and combining them with proxy models to create a reliable and diverse sampling solution.
Strengths: It is interesting the paper proposes a bootstrapping strategy to augment a training dataset with high-scoring samples that are collected from the score-conditioned generator and labeled using a proxy model.
Weaknesses: Experiments:
- The choice of baselines:
- The majority of baselines seem not to be specific to biological sequence design (e.g. https://openreview.net/forum?id=HklxbgBKvr and https://arxiv.org/abs/2006.03227, among many others); instead, the baselines are mostly general optimization methods. Applying a general method to a new application can lead to lower accuracy. The paper would be more convincing if it compared with methods targeting the same application.
- The rank-based weighted score idea may be partially inspired by reference [41], which is a rank-based weighting scheme for training unconditional generators (see Sec 3.1). Why not compare performance against that score design?
Idea:
- The paper is focused on high-score samples, not covering the median and lower score regions, which might harm learning of the entire distribution. Negative samples may be helpful for learning the entire distribution, e.g., as in energy-based models.
- The idea (using sequence-to-score to enhance score-to-sequence) is a relatively common idea. It would be more convincing to compare with similar results.
Ablation study:
- Plotting the number of generators vs. accuracy can be helpful to show the efficacy of the method.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See above
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Comparison with Biological Sequence Model-specific Baselines
Thank you for your recommendation. We have already compared it with the recent state-of-the-art method GFN-AL ("Biological Sequence Design with GFlowNets") [1] in the main text, which is an optimization method for biological sequence design. Based on your suggestion, we also made a comparison with the DyNA-PPO (https://openreview.net/forum?id=HklxbgBKvr):
| | TFBind8 (100th Percentile) | TFBind8 (50th Percentile) |GFP (100th Percentile) | GFP (50th Percentile) |
| -------- | -------- | -------- |-------- | -------- |
| DyNa-PPO | 0.942 $\pm$ 0.025 | 0.562 $\pm$ 0.025 |0.790 $\pm$ 0.003 | 0.790 $\pm$ 0.005 |
| Ours | **0.979 $\pm$ 0.001** | **0.833 $\pm$ 0.007** | **0.865 $\pm$ 0.000** | **0.853 $\pm$ 0.004** |
Note that many biological sequence design methods, including the papers you suggested (**DyNA-PPO**: https://openreview.net/forum?id=HklxbgBKvr, **P3BO**: https://arxiv.org/abs/2006.03227), rely on online (model-based) optimization, where continuous score function evaluations are required for model updates. In contrast, our benchmark focuses on offline design optimization, where access to the real-time Oracle score function is limited, and only the offline dataset is available for generating high-score samples.
---
### Response for "Why not compare the performance of an unconditional generator with a score-conditioned generator?"
Thank you for the insightful feedback. Here are the comparison results:
| | TFBind8 (Avg. Score) |
| -------- | -------- |
| RR + Unconditional Generator | 0.505 $\pm$ 0.010 |
| RR + Score-Conditional Generator | **0.662 $\pm$ 0.009** |
Note that RR stands for rank-based weighting. Our approach adapts the original rank-based weighting technique, designed for unconditional generators in latent space optimization, to work with score-conditional generators. By leveraging the score-conditional generator's ability to infer high-score regions based on query conditions, we improve the generator's performance orthogonally. This enhancement allows for a more accurate focus on high-score regions. We have conducted thorough experiments comparing rank-based weighting and high-score conditioning, demonstrating their respective advantages and contributions to the quality of generated samples. These results offer valuable insights into the effectiveness of each approach.
---
Thank you for providing additional ideas for this method.
1. **Using the lower score region**: Our score-conditioned generator, denoted as $p(x|y)$, is trained not only on high-score data but also on low-score data; it learns to map low scores to their corresponding data points. Although weighting emphasizes high-score samples, lower-score samples are still considered probabilistically. The study aims not only to learn an accurate distribution from diverse score-related data but also to bias the distribution towards high scores for design optimization and selecting good samples. We agree that considering lower-score samples throughout this process is valuable.
2. **Comparing with common idea** First, we already empirically compare with prior common ideas (see Table 1 and Table 2). Our method builds upon the common idea of using sequence-to-score information to enhance score-to-sequence models, as studied by MIN [3] and GFN-AL [1]. In our comparisons with these methods (shown in Tables 1 and 2), our approach outperforms them significantly. The key difference from MIN is using a proxy model for training rather than inference, which is mitigated through diverse aggregation. Compared to GFN-AL, our method generates high scores and avoids underfitting issues in the low-score region. We will further discuss these differences to provide a comprehensive analysis of the strengths and weaknesses of each approach.
---
### Response for the additional ablation study
Thank you for the suggestion. We conducted an ablation study on the number of generators and their performance gain on the UTR task. The results of this study will be included in the main text, Section 4.4. For detailed findings, please refer to the attached PDF.
---
### References
[1] Jain, Moksh, et al. "Biological sequence design with gflownets." International Conference on Machine Learning. PMLR, 2022.
[2] Trabucco, Brandon, et al. "Design-bench: Benchmarks for data-driven offline model-based optimization." International Conference on Machine Learning. PMLR, 2022.
[3] Kumar, Aviral, and Sergey Levine. "Model inversion networks for model-based optimization." Advances in Neural Information Processing Systems 33 (2020): 5126-5137.
---
Rebuttal Comment 1.1:
Title: Reminder
Comment: I would like to inform you that there are approximately 17 hours remaining until the conclusion of the discussion period. Could you kindly let us know if all of your concerns have been addressed or if there are any remaining points of concern? Your response would greatly contribute to the progress of our paper. Thank you very much for your courteous cooperation. | Rebuttal 1:
Rebuttal: Thank you to all the reviewers for providing valuable and constructive feedback on our work. We sincerely appreciate the time and effort you have dedicated to reviewing our manuscript. In response to each reviewer's comments, we have provided a detailed explanation and made necessary revisions to improve the quality of our work. Additionally, we have included a PDF file containing figures that present the results of the additional ablation study, as suggested.
We made two major responses for the two main points as overall comments:
1. **Novelty of this work.**
2. **Experimental fairness in terms of training speed and proxy model.**
---
### Novelty
Our novelty lies in the high-level algorithmic structure that combines simple yet intuitive low-level techniques for effective offline design optimization. Our high-level algorithmic structure is the reinterpretation/transformation of traditional statistical concepts, such as ensemble and bootstrapping, into deep learning forms for utilization in offline design optimization. The ensemble technique can be employed as an exploration strategy to effectively explore various modes while bootstrapping demonstrated its effectiveness as an exploitation strategy to reach high-score regions with limited samples. Particularly, through various experiments, it has been proven that the combination of these two simple yet robust concepts can serve as a powerful offline design optimization technique.
---
### Fairness of experiments
We acknowledge and appreciate the questions raised by some reviewers concerning two aspects of our approach: (a) the training time and (b) the usage of a proxy model, which they believe could potentially introduce unfairness.
However, we made fair experiments because:
1. Our approach exhibits a remarkably short training time, with each generator training process completed in approximately 3 minutes.
2. We used a simple proxy model, precisely the same as prior work [1].
Note that in offline biological sequence optimization tasks, the Oracle score function is a significant bottleneck due to its computationally expensive nature; in the case of drug development, this is equivalent to the expensive testing of the drug itself, which can take days or even months. Efficiently addressing this bottleneck is therefore crucial for such offline biological design.
[1] Jain, Moksh, et al. "Biological sequence design with gflownets." International Conference on Machine Learning. PMLR, 2022.
Pdf: /pdf/3b4d1daa653767c335ce82e2062286a10e35c76c.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Generative Noisy-Label Learning by Implicit Dicriminative Approximation with Partial Label Prior | Reject | Summary: This paper introduces a novel approach to tackle the challenge of noisy label learning through a generative framework. Firstly, it presents a new model optimization technique that establishes a direct association between the data and clean labels. Secondly, the generative model is implicitly estimated by leveraging a discriminative model, thereby eliminating the need for training a separate generative model and enhancing efficiency. Thirdly, the paper proposes an informative label prior inspired by partial label learning, which serves as a supervision signal for noisy label learning. Extensive experiments conducted on various noisy-label benchmarks demonstrate that the proposed generative model achieves state-of-the-art results. Remarkably, it achieves these results while maintaining a comparable computational complexity to discriminative models.
Strengths: - This paper introduces an informative prior for the latent clean label in noisy label learning, which is interesting.
- Experimental results show the effectiveness of the proposed method.
Weaknesses: 1. My concern is whether it is reasonable to build a generative noisy-label learning model, which only assumes that Y causes X. The previous work, where the latent feature Z and Y cause X, seems more reasonable.
2. From an algorithmic perspective, there seems to be no need to limit the method to image data, yet the notation at Line 138, Page 4 and the experiments focus on image data; this should be explained, or more experiments on other kinds of datasets should be conducted.
3. Some symbols need to be explained. For example, at Eq.(9), Page 5, $y_i(j)$ and $p_i(j)$ are very confusing. So is $|\mathcal{Y}|$ at Eq.(12), Page 5.
4. Some paragraphs, especially for the approach, need to be polished up to better explain how to address the proposed three issues in the abstract.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: What is the practical scene of such a generative noisy-label learning model?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors analyze the limitations in the conclusion of the paper. It is suggested that more explanations should be made on the reasonability of their generative model. Besides, some paragraphs need to be polished up.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer jVwM for the insightful comments.
> Z and Y cause X seems more reasonable
This is a reasonable concern. In principle, the latent feature $Z$ is important for generating $X$ in a causal relationship, because the generative model would need $Z$ to "anchor" the modeling of $P(X|Y)$: there is an infinite number of images that can be generated from a label. However, for the classification task, the ability to generate images does not help improve classification performance, so the latent feature $Z$ is not necessary. Furthermore, modeling $Z$ would require an extra module and extra computational resources.
Hence, we do not need $Z$ to anchor the image generation and instead represent $p(X|Y)$ with the discriminative model $q(Y|X)$, as explained in Eq.8. This means that we do not have an image generator in our model.
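To make this concrete, here is a minimal numeric sketch of the idea (our own illustration with made-up sizes, not the authors' code): the generative term $p(x_i|y)$ is represented by normalising a classifier's outputs $q(y|x_i)$ over the finite training set, so no image generator is involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical classifier outputs q(y|x_i) for 5 training samples and 3
# classes; each row sums to 1. Sizes are illustrative assumptions.
q = rng.dirichlet(np.ones(3), size=5)

# Implicit generative term: define p(x_i|y) only on the finite training set by
# normalising classifier outputs over samples, so each column sums to 1.
p_x_given_y = q / q.sum(axis=0, keepdims=True)

assert p_x_given_y.shape == (5, 3)
assert np.allclose(p_x_given_y.sum(axis=0), 1.0)
```

The point of the sketch is only that $p(x_i|y) \propto q(y|x_i)$ over the training samples is computable without any generative module.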
> Different kinds of datasets.
That is an excellent point, thank you. We have tested our method on two public NLP news topic classification tasks; please see Table 2 in the attached PDF.
We selected DyGen [1], a recently proposed generative noisy-label method, as a baseline. We followed the same hyper-parameter setup and architecture, and compared against all baselines. Our method outperforms the other approaches on the NLP tasks.
> Symbols to be explained.
- $\mathbf{y_i}(j)$ is the $j$-th entry of the one-hot latent clean label $\mathbf{y_i}$.
- $\mathbf{p_i}(j)$ is the $j$-th entry of the clean label prior $\mathbf{p_i}$.
- Eq. 9: "Coverage" defines if latent clean label $\mathbf{y_i}$ is present in $\mathbf{p_i}$. "Uncertainty" defines how many labels are present in $\mathbf{p_i}$.
- Eq. 12: $|\mathcal{Y}|$ is the label space cardinality (i.e., number of labels).
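A small sketch of how these quantities could be read off a partial label prior (hypothetical values; the exact definitions follow Eq. 9 of the paper, which we only approximate here):

```python
import numpy as np

# Hypothetical partial label prior p_i over 4 classes; the nonzero entries are
# the candidate labels, and y_i is the index of the latent clean label.
p_i = np.array([0.6, 0.3, 0.1, 0.0])
y_i = 1

coverage = float(p_i[y_i] > 0)            # 1.0 when the clean label is in p_i
uncertainty = int(np.count_nonzero(p_i))  # number of candidate labels in p_i

assert coverage == 1.0
assert uncertainty == 3
```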
> Some paragraphs, especially for the approach, need to be polished up to better explain how to address the proposed three issues in the abstract.
We will polish the approach section using the arguments from this rebuttal. For reference, the questions we pose in the abstract are: Q1: previous generative models optimize the joint likelihood; Q2: generative models are challenging to train; Q3: uninformative clean label prior.
Q1 and Q2 stem from the intractability of directly optimising $P(X|Y)$, since an infinite number of samples can be generated from labels. In Eq. 8, we define $P(X|Y)$ only on the finite set of training samples given by the classification task. This makes the direct optimisation of $P(X|Y)$ possible, which solves Q1. Furthermore, it allows us to optimise a standard discriminative model towards a generative objective, which solves Q2.
For Q3, the variational posterior $q(Y|X)$ depends on the modeling of the latent clean label $Y$. Motivated by [3], we represent $Y$ with a partial label distribution $P(Y)$ and define clean-label coverage and uncertainty to construct an informative partial label, which solves Q3.
> Practical scene of generative noisy-label learning
- Generative noisy label learning is a promising way of modeling the label transition matrix, as discussed by CausalNL[2].
- Our proposed method improves upon this idea by simplifying the generative part of the method, significantly improving classification accuracy.
- Our method combines generative modeling, noisy label learning and partial label learning in a unified framework.
***
[1] DyGen: Learning from Noisy Labels via Dynamics-Enhanced Generative Modeling, KDD 2023
[2] Instance-dependent Label-noise Learning under a Structural Causal Model, NeurIPS 2021
[3] Decompositional Generation Process for Instance-Dependent Partial Label Learning, ICLR 2023
---
Rebuttal Comment 1.1:
Title: About the rebuttal
Comment: 1. The authors have addressed most of my concerns.
2. Could you provide some evidence about the argument in the rebuttal that "for the classification task, the generation ability of image does not help improve classification performance and latent feature is not necessary"?
---
Reply to Comment 1.1.1:
Title: Evidence of the argument
Comment: We thank Reviewer jVwM for the reply.
> Evidence of image generation does not help improve classification performance.
We refer to "semi-supervised learning with GANs" as a closely related research field that combines a generative model with a discriminative task.
- [1] observed that feature matching generator obtains better semi-supervised performance while generate poor images.
- [2] also observed that their model generated better images but failed to improve semi-supervised performance.
- [3] showed that a perfect generator (generating images that exactly matches the input distribution) does not improve generalization performance under semi-supervised setup.
Although the semi-supervised task differs from noisy label learning, both are classification tasks that aim to find a decision boundary. We believe this serves as evidence that "image generation does not help improve classification performance".
> Evidence of latent feature is not necessary.
There are other methods that do not include $Z$ in their generative process (NPC [4], DyGen [5]). As stated in [4]:
- Modeling $Z$ for $X$ with large resolution leads to sub-optimal reconstruction, which is a common issue for generative modeling (VAE).
- Because $Z$ and $Y$ jointly generate $X$, they need to be disentangled for classification task. And such disentanglement is not the main goal of noisy label classification.
---
[1] Improved techniques for training gans, NeurIPS 2016
[2] Adversarial generator-encoder networks, AAAI 2018
[3] Good Semi-supervised Learning That Requires a Bad GAN, NeurIPS 2017
[4] From Noisy Prediction to True Label: Noisy Prediction Calibration via Generative Model, ICML 2022
[5] DyGen: Learning from Noisy Labels via Dynamics-Enhanced Generative Modeling, KDD 2023 | Summary: This paper addresses learning with noisy labels by directly optimizing P(X|Y), associating the data with clean labels. An informative label prior is derived, with experimental results on several benchmark datasets.
Strengths: The authors derive a solution for generative noisy label learning under some very strong and unrealistic assumptions.
Weaknesses: (1) The paper is very difficult to understand partially because the method is not well motivated. For example, there are many generative model based noisy label learning methods such as
(i) Label-Noise Robust Generative Adversarial Networks, CVPR 2019
(ii) From Noisy Prediction to True Label: Noisy Prediction Calibration via Generative Model, ICML 2022.
Moreover, approaches usually do not directly optimize P(X|Y) because it is not tractable under general conditions. This paper adds many strong but unrealistic assumptions in order to directly optimize P(X|Y), which may not be a reasonable approach. Namely, contribution point 1 is not clear.
(2) I have strong doubts about the theoretical soundness of the paper. For instance, in equation (10), why can the clean label prior be defined as the linear combination of the three terms \tilde{y}_i, c(i) and u_{i}(j), given that these three terms may not be mutually exclusive and may overlap with each other?
(3) Another issue is in the equation (12), the authors said “ the label ui is obtained by sampling from a uniform distribution of all possible labels proportionally to its probability of representing a noisy-label sample”, this assumption is way too strong and cannot be true. As a matter of fact, there are very few cases that the label distributions are uniform. Frequently, the noisy label distributions are highly imbalanced. Please take a look at the reference [ii] in ICML 2022, the usual way to assume noisy label distribution is multinomial instead of uniform distribution.
(4) The contribution is not clear. Besides the fact that contribution 1 is not really a contribution, in the second point of their claimed contributions they say: "Our generative model is implicitly estimated with a discriminative model, making it computationally more efficient than previous generative approaches."
This is also not valid, as there are many GAN-based noisy-label methods that apply both a generative and a discriminative model and let them collaborate with each other (for instance, reference [i]). Thus, the second point is also very ordinary and not novel.
Technical Quality: 1 poor
Clarity: 2 fair
Questions for Authors: There are many issues in this paper including poor theoretical ground, unrealistic and wrong assumption as well as very limited novelty.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 1 poor
Presentation: 2 fair
Contribution: 1 poor
Limitations: As mentioned before, as the contribution of this paper is not clear. Both of the points (1) and (2) are actually done by previous work with a wider scope and less strict assumption.
In addition to that, assuming the noisy label distribution to be uniform is too restrictive, making the solution of little practical use.
Last but not least, the solution is derived based on very strong and unrealistic assumption such as equations (10) and (12), making the experimental results not convincing.
In summary, the paper is too far away from the level of a Neurips paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 1: Very Strong Reject: For instance, a paper with trivial results or unaddressed ethical considerations
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We disagree with most points from Reviewer rUmF and provide details below to support our position.
> Strong but unrealistic assumptions to directly optimize $P(X|Y)$
To explain the assumptions we make to optimise $P(X|Y)$, we first need to consider its intractability, which is in part due to the infinite number of image samples $\mathbf{x}$ that can be generated from their clean labels $\mathbf{y}$. One solution to mitigate such intractability is the use of a latent image representation $Z$ to "anchor" the image generation process (CausalNL [1] and InstanceGM [2]). However, note that such an image generation process is in fact irrelevant for the noisy label classification task that we aim to solve. Image generation from noisy labels may be important for some papers, such as "Label-Noise Robust Generative Adversarial Networks, CVPR 2019" cited by Reviewer rUmF, but the goal of that paper is misaligned with ours (noisy-label classification), and its experiments show little relevance to our task.
For the classification task that we address in our paper, we assume that $Z$ is unnecessary and that $P(X|Y)$ is defined only on the finite number of training samples given by the classification task. These assumptions facilitate the direct optimisation of $P(X|Y)$ and avoid the problematic training of an image generator, particularly when $X$ is large (as noted by Reviewer 9Aeh). All of our experiments, on synthetic and real-world datasets, show significant improvements over previous generative and discriminative methods, both in accuracy and efficiency.
Given our arguments above and the superior experimental results, we respectfully disagree with Reviewer rUmF's comment that our assumption is "strong and unrealistic".
> Why the clean label prior can be defined as the linear combination of the three terms.
Given that clean labels are latent, our goal is to formulate an instance-wise partial label prior with multiple candidate labels that have high likelihood of containing the clean label. Motivated by [3], partial label can be factorized into one ground truth label and multiple complementary labels. For noisy label learning, we propose a formulation based on maximising label coverage and minimising label uncertainty, which is achieved with the combination of the training noisy label $\tilde{\mathbf{y_i}}$, the label coverage term $\mathbf{c_i}$, and an uncertainty term $\mathbf{u_i}$.
> Three terms may overlap.
The reviewer is correct in saying that these three terms may not be mutually exclusive and may overlap. The overlap likely means that the method has found the clean label for the instance: in that case, label coverage is maximised to full coverage and uncertainty is minimised to a one-hot label, which is the ideal case for a clean label. It is unclear why this overlap would be a problem for the theoretical soundness of the paper.
> Uniform noisy label distribution is wrong. Please take a look at the reference NPC[4] in ICML 2022, the usual way to assume noisy label distribution is multinomial instead of uniform distribution.
We agree with Reviewer rUmF on this point. In fact, the label coverage term $\mathbf{c_i}$ in $p(\mathbf{y})$ is obtained by sampling from a multinomial distribution, the same as NPC [4] assumes for the noisy label distribution. The label uncertainty term $\mathbf{u_i}$ is sampled from a uniform label distribution to smooth the partial label prior based on the weight $w_i$ (our estimate of how likely the sample is clean-labelled), where low $w_i$ implies higher uncertainty, producing a more uniform $p(\mathbf{y})$ to better regularise the training process.
> The claim that the generative model is implicitly estimated with a discriminative model is ordinary and not novel.
As far as we are aware, **all** single-stage (i.e., end-to-end training) generative noisy-label learning methods previously proposed in the field depend on the generation of images or generation of low-dimensional image representations. Ours is the first single-stage generative noisy-label learning method that **only** depends on a discriminative model and a transition matrix, making the run-time complexity of our method similar to the complexity of discriminative models and much smaller than the complexity of previously proposed generative models. Please note in Table 7 that our method is as efficient as training a simple CE-loss approach, and extremely more efficient than SOTA discriminative models DivideMix[5] and generative models [1,2].
***
[1] Instance-dependent Label-noise Learning under a Structural Causal Model, NeurIPS 2021
[2] Instance-dependent noisy label learning via graphical modelling, WACV 2022
[3] Decompositional Generation Process for Instance-Dependent Partial Label Learning, ICLR 2023
[4] From Noisy Prediction to True Label: Noisy Prediction Calibration via Generative Model, ICML 2022
[5] Dividemix: Learning with noisy labels as semi-supervised learning, ICLR 2020
---
Rebuttal Comment 1.1:
Title: About the rebuttal
Comment: Thanks for the response. I have read over the rebuttal.
However, I feel that simplifying the assumption on the noisy label distribution to be uniform is way too strong and limits the usage of the results of the paper. Essentially, all (or most) of the experimental results rest on this strong and unrealistic assumption, which makes the work less interesting. As a matter of fact, accurately modeling the noisy label distribution is the most difficult part of the problem, which I expected to be resolved or partially resolved.
Given that a more realistic noisy label distribution has already been proposed in "From Noisy Prediction to True Label: Noisy Prediction Calibration via Generative Model, ICML 2022" with a multinomial distribution, and the uniform distribution is only a special case of the multinomial, I really do not understand why we need another analysis of this special case.
Thus, I maintain my original score.
One reviewer.
---
Reply to Comment 1.1.1:
Title: We did not assume noisy label distribution is uniform
Comment: We believe there are misunderstandings of our paper, so we need to clarify: **We assume the noisy label distribution is multinomial, just like NPC did**.
- The prior label $p_i(j)$ we construct contains multiple labels (Eq. 10). The second term, **label coverage $c_i(j)$**, is where we model the noisy label distribution and expect it to match the clean label distribution; it is **a multinomial (categorical) distribution sampled once** (Eq. 11 and Line 183). Furthermore, the ablation in Fig. 3 shows that clean label coverage keeps increasing over training; if we really assumed a uniform noisy label distribution, this metric would stay at a fixed value.
- The third term, **label uncertainty $u_i(j)$**, is indeed sampled from a uniform distribution. However, its role is not to model the noisy label distribution but to **reflect how likely the sample is to be noisy-labelled**. As shown in Eq. 10, $Z$ is a normalisation factor that makes $p_i(j)$ sum to 1. The more likely $\tilde{y}_i$ is a noisy label ($w_i$ close to 0), the more labels $u_i(j)$ are sampled and the more uniform $p_i(j)$ becomes. This lowers the confidence of the prior label compared with a one-hot label and slows the fitting of noisy-labelled samples.
- As shown in *Decompositional Generation Process for Instance-Dependent Partial Label Learning, ICLR 2023*, the prior label we construct is reasonable in partial label learning, and we justify it for noisy label learning.
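To illustrate the construction described above, here is a hedged sketch of assembling the prior from the three terms (the schedule tying the number of uniformly sampled labels to $w_i$ is our own simplification of Eq. 10-11, not the authors' exact procedure):

```python
import numpy as np

rng = np.random.default_rng(0)
K = 5                    # number of classes (illustrative)
y_tilde = np.eye(K)[2]   # one-hot training (noisy) label
w = 0.3                  # low w => higher uncertainty => flatter prior

# Coverage term c_i: one label sampled from a multinomial (categorical)
# model prediction, as the rebuttal describes for Eq. 11.
pred = rng.dirichlet(np.ones(K))
c = np.eye(K)[rng.choice(K, p=pred)]

# Uncertainty term u_i: labels sampled uniformly without replacement; we
# sample more of them when w is low, flattening the prior (our assumption).
n_extra = int(round((1 - w) * (K - 1)))
u = np.zeros(K)
u[rng.choice(K, size=n_extra, replace=False)] = 1.0

# Combine and normalise by Z so the partial label prior sums to 1 (Eq. 10).
p = y_tilde + c + u
p /= p.sum()

assert np.isclose(p.sum(), 1.0)
```

With $w$ close to 1, `n_extra` shrinks and `p` concentrates near the one-hot training label, matching the behaviour described in the rebuttal.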
There are several other questions in Reviewer rUmF's original comment. Does our rebuttal address the reviewer's concerns?
To further improve the performance of the proposed framework, the authors also present an informative label prior which combines benefits from both the high coverage (sample from Categorical distribution) and the low uncertainty (sample from Uniform distribution) as well as the information contains in the noisy labels.
The proposed work has two main contributions: 1) derive a KL-divergence term from the generative model that allows a discriminative model to be transformed into an implicit generative model, which guarantees the performance of a generative model and the efficiency of a discriminative model at the same time for learning noisy labels; 2) propose a novel clean label prior which allows a tradeoff between the label coverage and uncertainty. Although the second contribution is not as novel in terms of its simplicity, the visualization of its coverage and uncertainty enhances its impact.
The results of the wide range of experiments on benchmark datasets for noisy labels demonstrate the effectiveness of the proposed framework. The ablation studies are also carefully designed and conducted to validate the contribution of each component of the framework.
###########################################################################
######################### Post Rebuttal ######################################
###########################################################################
The authors have addressed all of my concerns. I am happy to raise my score from 6 to 7.
Strengths: The paper is well-presented and easy to follow, with detailed descriptions of each component. The ablation of the framework and the analysis for the proposed clean prior are well performed. The performance gains on CIFAR benchmarks are significant, especially when the instance-dependent noise ratio is high. This indicates that the proposed model could indeed improve the model's robustness against noisy labels.
Minimizing the KL-divergence term KLD( q(y|x) || p(x|y)p(y) ) to estimate the generative model parameters using discriminative model parameters is novel. In addition, the contribution regarding the clean label prior makes this paper a valuable addition to the community.
Weaknesses: Minor:
1. The authors should elaborate more on how the modeling P(X|Y) contributes to the informativeness of noisy labels in the introduction part.
2. Please check the references. Some of them are incomplete (e.g., no journal or conference name for [37]).
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: In the related work, the authors mention that the estimation of the label transition matrix is troublesome. Based on equation (7), the estimation of the label transition matrix is still required for the proposed framework. I think the authors did not mention how their optimization of the label transition matrix (the cross-entropy term) differed from the existing work. So, how does the proposed framework alleviate the problems faced by label transition estimation?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer 6r3h for the insightful comments.
> Modeling P(X|Y) contributes to the informativeness of noisy labels.
As shown in Eq. 5, $P(X|Y)$ is modelled with the variational posterior $q(Y|X)$. In turn, $q(Y|X)$ depends on the modelling of the latent clean label $Y$. Inspired by partial label learning, we represent $Y$ with a partial label distribution $P(Y)$ that can be seamlessly integrated as a supervisory signal without significantly modifying the framework. Motivated by [1], we define the informativeness of $P(Y)$ as clean-label coverage of the true label plus a regularisation for the complementary labels (i.e., all other labels).
> References incomplete.
We will fix this; the incomplete reference is PiCO [2].
> how does the proposed framework alleviate the problems faced by label transition estimation?
The main issue faced by noisy-label learning methods that rely on label transition estimation is the identifiability of the transition matrix, which forces researchers to make compromising assumptions such as masking, separability, rankability, clusterability, or the existence of anchor points. Recall that our causal generative process in Fig. 1 allows the factorisation of the joint distribution as $P(\tilde{Y},X,Y) = P(Y)P(X|Y)P(\tilde{Y}|Y,X)$. The dependence on $P(X|Y)$ in this equation reduces the uncertainty of the distribution $P(\tilde{Y}|Y,X)$ and encourages the identifiability of the transition relationship, as indicated in CausalNL [3], without making any compromising assumption.
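As a minimal numeric check (our own illustration over arbitrary small discrete spaces, not the paper's code), the factorisation $P(\tilde{Y},X,Y) = P(Y)P(X|Y)P(\tilde{Y}|Y,X)$ is an instance of the chain rule and can be verified directly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random joint P(Y, X, Ytilde) over 2 clean labels, 3 instances, 2 noisy
# labels; sizes are arbitrary, chosen only to check the chain rule.
joint = rng.random((2, 3, 2))
joint /= joint.sum()

p_y = joint.sum(axis=(1, 2))                             # P(Y)
p_x_given_y = joint.sum(axis=2) / p_y[:, None]           # P(X|Y)
p_t_given_yx = joint / joint.sum(axis=2, keepdims=True)  # P(Ytilde|Y,X)

# Reconstruct the joint from the three factors and confirm it matches.
recon = p_y[:, None, None] * p_x_given_y[:, :, None] * p_t_given_yx
assert np.allclose(recon, joint)
```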
***
[1] Decompositional Generation Process for Instance-Dependent Partial Label Learning, ICLR 2023
[2] Pico: Contrastive label disambiguation for robust partial label learning, ICLR 2022
[3] Instance-dependent Label-noise Learning under a Structural Causal Model, NeurIPS 2021
---
Rebuttal Comment 1.1:
Title: About the Rebuttal
Comment: I would like to thank the author for the detailed response to my concerns.
Now I understand how the proposed causal generative process alleviates the problems faced by label transition estimation. Modelling the joint distribution by factorizing it with the dependence on P(X|Y) can indeed help to identify the transition relationship.
However, the authors did not address my concern regarding how modelling P(X|Y) improves the informativeness of noisy labels. I understand that modelling P(X|Y) improves the robustness of the model against noisy labels, but how does such a process affect the generation of the noisy label?
Reviewer 6r3h
---
Reply to Comment 1.1.1:
Comment: We thank Reviewer 6r3h for the reply.
> Transition matrix estimation
We are pleased that the reviewer recognized our argument. The explanation will be added to the manuscript.
> Informativeness of noisy label
We feel there may be some confusion. The informativeness mentioned in the introduction refers to the *clean label prior* $P(Y)$ (Lines 51, 58, 76-78): whether it is non-informative (a uniform distribution, as in previous methods) or informative (our method). It does not refer to the noisy label.
> $P(X|Y)$ and generation of noisy label
The generation of the noisy label is assumed to be instance-dependent (Fig. 2), under the joint effect of $X$ and $Y$. Without modeling $P(X|Y)$, $P(X)$ and $P(Y)$ need to be estimated separately, and $P(X)$ is a generative term that requires further decomposition (e.g., a latent variable $Z$, as in common generative tasks).
By modeling $P(X|Y)$ directly, the noisy label generation only requires estimating $P(Y)$, and $P(X)$ can be approximated through $P(Y)$. In other words, $P(X|Y)$ reduces instance-dependent noise modeling from two variables to a single variable, which significantly saves computation. | Summary: Most previous works address learning with noisy labels using discriminative models, while this paper takes a generative approach that directly maximizes the association between data and clean labels. The generative model is implicitly estimated with a discriminative model, making it computationally more efficient.
Strengths: - This paper successfully addresses several issues that might exist for approaches rely on generative models e.g. challenging to train and tend to use uninformative clean label priors.
- The experiments are extensive with results on many datasets with both realistic and synthetic label noise. All results show the effectiveness of the proposed method.
Weaknesses: - The method is not well motivated. Even though most previous methods adopt discriminative models and generative models are less discussed, it is still very hard for the reviewer to understand why generative models are better than discriminative ones for noisy label problems.
- The author argues that the small loss hypothesis offers little guarantee of successfully selecting clean-label samples, however, this hypothesis is very related to the early-learning phenomenon in which the clean labels tend to fit earlier in the training than the noisy labels. This paper still uses this early learning to estimate the clean label prior.
- Figure 2 is difficult to read and it does not make understanding the method any easier, the presentation of the paper should be significantly improved.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - How does this generative method deal with the case when input $X$ is very large?
- Is there any intuition why the reversed KL loss eventually produces better results?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer 9Aeh for the insightful comments.
> Hard for the reviewer to understand why generative is better than discriminative models for noisy label problems.
It is still unclear whether generative or discriminative methods are more suitable for noisy label learning. Generative methods are less discussed because they require an auxiliary generative module, as noted by Reviewer 9Aeh. However, as discussed in CausalNL [1], generative models encourage the identifiability of the transition matrix by optimising $p(X|Y)$. In addition, the generative formulation does not require any additional regularisation [2,3,4] for transition matrix estimation. Our method enjoys the advantages of both sides because we optimise a generative objective but train with a discriminative approach. Nevertheless, we believe the debate over which formulation is more appropriate for noisy-label learning needs further study.
> This paper still uses early learning to estimate the clean label prior.
We agree, but it is important to emphasise that we do not use $w_i$ to select clean-label samples, as previously published papers do. Instead, $w_i$ works as a weight on the label prior uncertainty regularisation: low $w_i$ implies higher uncertainty, producing a more uniform $p(y)$ and slower training convergence, whereas high $w_i$ produces a $p(y)$ closer to a one-hot distribution, leading to faster training convergence.
> Figure 2 is difficult to read.
We have updated Figure 2 in the attached PDF.
> input $X$ is very large?
The image resolution of $X$ does not have a significant impact on our framework since we approximate the generative model with a discriminative approach, as explained in Eq. 8. Hence, we do not actually generate images during training. The only impact that a large $X$ could have in our model would be in terms of the size of the mini batch for training, which is the same impact as for other discriminative models.
> Reversed KL works better?
The reverse KL loss
$$\mathsf{KL} \left[c_i\times \frac{g_{\theta}(\mathbf{x_i})}{\sum_j g_{\theta}(\mathbf{x_j})}\odot \mathbf{p_i} \,\Big\|\, g_{\theta}(\mathbf{x_i}) \right]$$
can be decomposed into the negative entropy of the first term plus the cross entropy between the first and second terms.
So, by minimising this loss, we aim to maximise the entropy of the first term and minimise the cross entropy between the first and second terms.
For samples where the training is certain about their training labels, with $\tilde{\mathbf{y_i}} \approx \mathbf{c_i}$ and $w_i$ close to 1, $\mathbf{p_i}$ will be close to a one-hot label.
In this case, the reverse KL has a constant entropy value for the first term and a standard cross-entropy loss for the second term.
The original (forward) KL loss, in contrast, always maximises the entropy of $g_{\theta}(\mathbf{x_i})$ and minimises the cross entropy between $g_{\theta}(\mathbf{x_i})$ and $c_i\times \frac{ g_{\theta}(\mathbf{x_i})}{\sum_j g_{\theta}(\mathbf{x_j})} \odot \mathbf{p_i}$, regardless of whether the training is certain about the sample's training label.
This subtle difference gives the reverse KL loss a better gradient landscape and better training results.
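The decomposition $\mathsf{KL}[q \| g] = -H(q) + \mathrm{CE}(q, g)$ is easy to verify numerically; below is a minimal sketch with arbitrary example distributions standing in for the two terms (the names `q` and `g` are illustrative, not from the paper):

```python
import numpy as np

def kl(q, g):
    """KL divergence KL[q || g] for discrete distributions."""
    return float(np.sum(q * np.log(q / g)))

def entropy(q):
    return float(-np.sum(q * np.log(q)))

def cross_entropy(q, g):
    return float(-np.sum(q * np.log(g)))

# Arbitrary example distributions standing in for the first and second terms.
q = np.array([0.7, 0.2, 0.1])   # e.g. the label-weighted, normalised prediction
g = np.array([0.5, 0.3, 0.2])   # e.g. the raw model prediction

# KL[q || g] = -H(q) + CE(q, g): minimising it pushes H(q) up and CE(q, g) down.
assert np.isclose(kl(q, g), -entropy(q) + cross_entropy(q, g))
```

Note that when `q` is (close to) one-hot, `entropy(q)` is (close to) a constant zero, leaving only the standard cross-entropy term, as described above.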
***
[1] Instance-dependent Label-noise Learning under a Structural Causal Model, NeurIPS 2021
[2] Instance-dependent label-noise learning with manifold-regularized transition matrix estimation, CVPR 2023
[3] Provably end-to-end label-noise learning without anchor points, ICML 2022
[4] Estimating instance-dependent bayes-label transition matrix using a deep neural network, ICML 2022
---
Rebuttal Comment 1.1:
Comment: Thanks for your response, and thank you for answering my questions.
I feel the weak points of the paper mainly stem from the motivations:
First, the motivation for proposing a generative model is not sufficient since, as the authors agreed, there is no obvious advantage of the generative model over previous discriminative models. And in order to get this generative model to work, assumptions are made and tricks are included, making the method look complex.
Second, part of the motivation is based on previous methods that rely on the small-loss hypothesis to select samples, while this paper uses early learning to estimate clean labels; from the reviewer's point of view, these two methods are fundamentally based on similar assumptions -- the clean labels are learned first.
Based on these weaknesses, I will keep my original score.
---
Reply to Comment 1.1.1:
Comment: > Advantage of generative modeling and our framework.
It is hard to find evidence in the limited literature on generative noisy-label learning (CausalNL/InstanceGM and NPC are the only related works we are aware of). These methods promote the identifiability of transition matrix estimation naturally by optimising $P(X|Y)$, as shown in CausalNL. But the burden is an extra generative module (VAE, GAN).
Similar discussions have appeared in other research areas, including semantic segmentation [1], uncertainty estimation [2] and OOD detection [3]. We believe the advantages of generative modeling are still under exploration, and our framework provides a less expensive way to achieve this goal.
> Our method is complex.
Our framework derives from Bayes' rule (Eq. 4-7) with one straightforward assumption. It builds a plausible approach for optimising a generative model with no extra cost. Furthermore, our method unifies noisy-label learning, generative optimisation and partial-label learning in a single framework (as discussed with Reviewer jVwM). We achieve competitive results in both performance and efficiency. Thus, we respectfully believe our method is not complex.
> Early learning to estimate clean labels
We agree with the reviewer on this point. However, we also showed results obtained by *not using early learning* in Supplementary Tab. 2 (label coverage with different $\beta$). By setting $\beta=0$ or $\beta=0.5$, we exclude early learning from our framework and still achieve reasonable performance.
In any case, we thank Reviewer 9Aeh for the valuable comments. We will update our paper with a clearer motivation.
---
[1] GMMSeg: Gaussian Mixture based Generative Semantic Segmentation Models, NeurIPS 2022
[2] Generative classifiers as a basis for trustworthy image classification, CVPR 2021
[3] Input complexity and out-of-distribution detection with likelihood-based generative models, ICLR 2019 | Rebuttal 1:
Rebuttal:
- **Reviewer 9Aeh, Fig.2 is difficult to read**. We have uploaded a new figure to describe our proposed framework more clearly.
- **Reviewer jVwM, results from other kinds of datasets**. We have uploaded new results from two public NLP news topic classification benchmarks. The baselines are selected from "DyGen: Learning from Noisy Labels via Dynamics-Enhanced Generative Modeling, KDD 2023".
Pdf: /pdf/2d5d80e114b8f67e26f21fdc096225f8d550ae95.pdf | NeurIPS_2023_submissions_huggingface | 2023 |
Rotating Features for Object Discovery | Accept (oral) | Summary: CAEs promise to resolve some of the concerns of slot-based representations: flexible object granularity, part-whole hierarchies extracted as needed, and faster training. However, the original CAE was tested only on grayscale images and on a rather small number of objects (2-3). This paper asks, *how can we scale CAEs* beyond this?
**Approach.** Compared to classical neural networks, in the proposed model all activations in the network are “tagged” with binding information. For this, every scalar value of the conventional vector representation is replaced with an $N$-dimensional vector: the magnitude of this vector plays the role of the classical activation, while its orientation captures the binding information. Compared to the original CAE, the proposed model provides a generalization from having just 2 components (real and imaginary) to $N$ components, where $N$ can be arbitrarily large.
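As a toy illustration of this representation (made-up numbers, not from the paper; `n = 2` would recover the complex-valued features of the original CAE):

```python
import numpy as np

# A conventional activation is a scalar; a rotating feature replaces it with an
# n-dimensional vector. Its magnitude acts as the classical activation, and its
# unit-norm orientation carries the binding ("tag") information.
feature = np.array([0.5, 0.5, 0.5, 0.5])    # one rotating feature, n = 4

magnitude = np.linalg.norm(feature)          # plays the role of the activation
orientation = feature / magnitude            # unit vector: binding information

print(magnitude)       # 1.0
print(orientation)     # [0.5 0.5 0.5 0.5]
```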
In **experiments**,
1. The paper tests whether this change can help improve the capacity of CAEs in terms of the number of objects or not.
2. The paper is also one of the first works to scale CAEs to real scenes.
3. The paper also proposes a new procedure for extracting object segments from distributed representations.
Strengths: 1. (One of the) First successful attempts to make a CAE-like model scale to more objects and beyond simplistic grayscale images.
2. The way weights and biases were applied seems to have been simplified (a welcome change) compared to the original CAEs. An ablation experiment highlighting this specific change for the case of $N=2$ could be useful though.
3. Seems applicable to any data modality— demonstrated to some extent via the applicability of the model on RGB-D data in addition to RGB data.
Weaknesses: 1. One of the novelties seems to be the use of DINO pre-trained features for the first time in the context of CAEs. Therefore, a comparison with and without DINO pre-training would be useful to show in the main paper.
2. Can original CAE be a baseline in the real-data experiment?
3. The paper makes the jump from 4 object scenes directly to 10 object scenes. It may be useful to test perhaps a more gradual increase e.g., by also testing 7 object scenes and 13 object scenes, whether it places gradually more demand on the choice of $N$.
4. It would be interesting to see the performance on a standard dataset like Tetris or CLEVR that was not deliberately designed for this paper. That being said, it need not outperform the previous slot-based methods here considering various other potential benefits of CAEs.
5. At test time, can it handle a larger number of objects than shown during training?
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. Is the layer choice important for segmentation? Do these patterns emerge also for the intermediate layers?
2. I am curious what is the incentive for the model to assign different “phases” to different objects? In slot-based models, this role was played by the inherent capacity bottleneck of slots. I am curious about some insights about this.
3. I could not entirely follow the motivation of Appendix D.3 “Object separation within the bottleneck of autoencoder utilizing Rotating Features…”
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, the paper discusses the limitations, but one should also consider the potential benefits of the CAE framework.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful review. We are taking this opportunity to address the concerns and inquiries raised.
### Strengths
**2) The way weights and biases were applied seems to have been simplified (a welcome change). An ablation experiment highlighting this specific change for the case of n=2 could be useful.**
Figure 4 in the paper shows that when the CAE model is compared to Rotating Features with $n=2$, they achieve equivalent object discovery performances. This suggests that the modified rotation mechanism does not impact the performance significantly.
### Weaknesses
**1/2) A comparison with and without DINO pre-training would be useful to show in the main paper. / Can original CAE be a baseline in the real-data experiment?**
In line with the reviewer's suggestion, we ran two additional baselines on the Pascal VOC dataset. We firstly applied the Rotating Features model directly to the raw input images, referred to as RF-DINO. Secondly, we applied a CAE model to the DINO preprocessed features, while incorporating our novel evaluation procedure (CAE* +DINO).
|Model|MBO$_i$|MBO$_c$|
|--|--|--|
|RF-DINO|0.282$\pm$0.006|0.320$\pm$0.006|
|CAE* +DINO|0.329$\pm$0.009|0.374$\pm$0.010|
|RF+DINO |0.407$\pm$0.001|0.460$\pm$0.001|
The results reveal that our proposed approach of applying Rotating Features to DINO features (RF+DINO) significantly outperforms both baselines. This highlights the importance of our contributions: generalizing the complex-valued features to higher dimensions and using DINO features as the input to our model are both essential to achieving competitive object discovery performance on real-world images. We will include these baselines in the revised paper.
**3) The paper makes the jump from 4 object scenes directly to 10 object scenes. It may be useful to test a more gradual increase.**
In response to the reviewer's suggestion, we conduct a new experiment to assess how the choice of $n$ impacts the object discovery performance of a Rotating Features model depending on the number of objects in a scene. We accomplish this by creating variations of the 10Shapes dataset that contain between two and ten objects. For each variant, the number of objects per image equals the total number of distinct objects throughout the respective dataset.
As illustrated in Figure 2 in the PDF uploaded alongside the general response, the object discovery performance significantly drops as the number of objects increases when $n=2$. However, the performance remains consistently high when $n=10$. These findings indicate that the choice of $n$ becomes increasingly critical as the number of objects rises. We will include this experiment in the revised paper.
**4) It would be interesting to see the performance on a standard dataset.**
In response to the reviewer's suggestion, we evaluate the performance of the proposed Rotating Features model on the Multi-dSprites and CLEVR datasets, comparing it to its predecessor — the CAE model [2]. The outcomes are presented in the table below. Note that since the original CAE was only applicable to grayscale images, we combine it with our proposed evaluation procedure to make it applicable to multi-channel input images, which we denote as CAE*.
|Dataset|Model|ARI-BG|
|--|--|--|
|Multi-dSprites|CAE*|0.371$\pm$0.056|
||Rotating Features|0.888$\pm$0.015|
|CLEVR|CAE*|0.289$\pm$0.042|
||Rotating Features|0.664$\pm$0.013|
The results indicate that the Rotating Features model significantly surpasses the performance of the CAE*, demonstrating good object separation on these two datasets. However, the results still lag behind the state-of-the-art. The qualitative examples in Figure 3 of the PDF uploaded alongside the general response show that Rotating Features encounter the same issue here as we have seen with the colored 4Shapes dataset: objects of the same color tend to be grouped together. As demonstrated with the RGB-D version of the colored 4Shapes dataset and our results using pretrained DINO features on real-world images, this issue can be addressed by using higher-level input features.
**5) At test time, can it handle a larger number of objects than shown during training?**
See the general response above.
### Questions
**1) Is the layer choice important for segmentation? Do these patterns emerge also for the intermediate layers?**
For object segmentation using Rotating Features, the current architectural layout suggests the output layer's representation as the most logical choice, as it has the highest spatial resolution.
Nevertheless, it is interesting to note that object-centric features also emerge in the intermediate layers of the architecture. In Appendix D.3, we examine the object separation within the autoencoder's bottleneck. This experiment reveals that it is possible to distinguish the representations of two separate objects within this layer's representation.
**2) What is the incentive for the model to assign different phases to different objects?**
The binding mechanism enables the model to process information relatively independently of one another. If features of one object have orientations pointing in a different direction than features of another object, their features can be processed with little interference. This mechanism encourages the model to assign different orientations to features that it aims to process separately, which naturally leads to object-centric representations.
**3) I could not entirely follow the motivation of Appendix D.3.**
This experiment is related to question 1 above, and investigates whether object separation also emerges in intermediate layers. Since it is not practical to evaluate the object segmentation performance here, we develop an alternative way to investigate object-centricity. Our approach in this case is a weakly-supervised semantic segmentation setup, which tests the alignment between the intermediate object representations and their counterparts at the output layer.
---
Rebuttal Comment 1.1:
Title: Thank You
Comment: Thank you for the rebuttal. The results on CLEVR seem much better relative to CAE and would be nice to include in the paper IMO. I also appreciate testing a gradually increasing number of objects.
I change my rating to 7. | Summary: This work proposes a novel approach to unsupervised object discovery that does not depend on slots. Instead, the model uses an extra set of dimensions to code object assignment based on rotation, potentially allowing for a more flexible distributed form of object discovery than in standard slot-based approaches. The model is shown to perform well on toy tasks, with promising results on real-world images.
Strengths: - This work presents an interesting new direction, substantially reconsidering the problem of object discovery relative to the now ubiquitous slot-based methods.
- The method performs well on toy tasks, scaling to a relatively large number of objects, and also shows promising results on real-world images.
- The method is significantly more efficient than popular slot-based methods.
- The paper stimulates many interesting directions for future work.
Weaknesses: I have a number of suggestions and questions that may help to further improve the paper:
- A major potential advantage of the proposed approach is that it is more flexible than slot-based methods, specifically regarding the number of objects that can be segmented. The results in supplementary figure 10 suggest that, with a sufficient number of dimensions (e.g. ~10), a relatively large number of objects can be represented, and that even more objects can be represented without needing to add many more dimensions. Can this advantage over slot attention be empirically demonstrated? For instance, are there any instances in Pascal VOC or FoodSeg that involve more objects than the number of slots in DINOSAUR? If so, it would be interesting to see whether rotating features outperforms DINOSAUR on those problems. Alternatively, controlled experiments with CLEVR, or even the Nshapes dataset, could be performed to demonstrate the superior representational capacity of rotating features over slot attention (especially when there are more objects than slots).
- A desirable feature of slot attention is that it is permutation invariant, which allows for a dynamic binding of features to objects (i.e. by randomly initializing the slots and iteratively refining them through competition). Here, by contrast, particular features have learned biases toward particular orientations, does this interfere with the ability of the model to perform variable-binding in a dynamic manner? How would the model perform when tested on a greater number of objects than it was trained on (given that it has a sufficient number of dimensions to represent those objects)? In other words, is the method more efficient than slot attention because it is also less flexible?
- Have the authors considered sharing the orientation across features at each location in a convolutional feature map? Intuitively, it seems that all features at a particular location should only be assigned to a single object, rather than having a unique assignment of each feature.
- Have the authors investigated how the assignment of objects evolves across layers? I am wondering whether the competition that occurs over time in methods like slot attention is somehow distributed across layers in this model.
- It would be informative to include an ablation of the binding mechanism. I also found the description of this mechanism somewhat counterintuitive. It is described as 'weakening the connections between features with dissimilar orientations', but it almost seems as if it does the exact opposite of this. The magnitude of the features in $\chi$ is completely unrelated to the orientations of the inputs. Therefore, it seems that mixing $\chi$ with $\psi$ is only weakening the influence of orientation, allowing incoming features with dissimilar orientations (but high synaptic weight values) to exercise a greater influence on the magnitude of the feature representation. Can the authors add some additional explanation of this mechanism?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: I have listed some questions in the previous section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: There are no discernible negative societal impacts related to this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful review and constructive feedback. We welcome the opportunity to address the questions and concerns that have been brought up.
**1 / 2) How would the model perform when tested on a greater number of objects than it was trained on? / Controlled experiments could be performed to demonstrate the superior representational capacity of rotating features over slot attention (especially when there are more objects than slots).**
See the general response above.
**3) Have the authors considered sharing the orientation across features at each location in a convolutional feature map?**
We have experimented with this, but in preliminary results it generally performed worse. Intuitively, we believe it is helpful to have the possibility for each individual feature to have its own orientation (and therefore object binding), as this allows overlapping objects to be represented simultaneously within the same location. This becomes increasingly important as we reduce the feature map size in the architecture, and essential as we move through the fully-connected layers.
**4) Have the authors investigated how the assignment of objects evolves across layers? I am wondering whether the competition that occurs over time in methods like slot attention is somehow distributed across layers in this model.**
This hypothesis could indeed be correct. In Appendix D.3, we have included an experiment that examines the object separation within the architecture's bottleneck (i.e., after the encoder). While we find a meaningful separation here, it does not seem to be prominent enough, yet, to be able to separate more than two objects at a time. Exploring whether this is indeed a result of some form of competition distributed across layers, and how to modify the network to achieve stronger object separation throughout the entire architecture, would be an intriguing direction for future research.
**5) It would be informative to include an ablation of the binding mechanism. I also found the description of this mechanism somewhat counterintuitive.**
Following the reviewer's suggestion, we performed an ablation of the binding mechanism. In this analysis, we modify the Rotating Features model by substituting $\mathbf{m}_{\text{bind}}$ with $\left\lVert{\psi}\right\rVert_2$ in Equation 4. This effectively removes the binding mechanism ($\chi$). Then, we apply the adjusted model to the grayscale 4Shapes dataset. While the original model achieves an ARI-BG score of $0.987 \pm 0.003$ on this dataset, the ablated model fails to learn any object separation ($0.059 \pm 0.017$). This result highlights the critical role the binding mechanism plays in enabling the Rotating Features to learn object-centric representations. We will include this experiment in the revised paper.
The model without binding mechanism fails to learn object-centric representations, as it cannot leverage the additional rotation dimensions. Without the binding mechanism, these dimensions inherently do not have a strong effect on the computations. The binding mechanism ensures that features with similar orientations are processed together, while features with dissimilar orientations are essentially masked out. This allows the network to create separate streams of information that it can process separately, which naturally leads to the emergence of object-centric representations.
To expand on this intuition, imagine the most extreme scenario where a feature is of the opposite orientation to a group of aligned features, as shown on the left-hand side of Figure 3 (cosine similarity = -1). Without the binding mechanism, the misaligned feature would effectively be subtracted from the aligned features, resulting in a smaller output magnitude (as shown for $\left\lVert{\psi}\right\rVert_2$). The binding mechanism reduces this effect and results in a larger output magnitude (as shown for $\mathbf{m}_{\text{bind}}$). Effectively, the binding mechanism masks out the misaligned feature, as the output magnitude would be the same if the misaligned feature was replaced by a zero vector. We will amend our description in the paper to include this intuition.
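This masking effect can be checked with a toy computation. The sketch below assumes, purely for illustration, that the binding magnitude is the equal-weighted combination $\mathbf{m}_{\text{bind}} = \tfrac{1}{2}\lVert\psi\rVert_2 + \tfrac{1}{2}\chi$, where $\chi$ applies the same weights to the feature magnitudes only; the two-dimensional features and unit weights are made up:

```python
import numpy as np

# Two input features with equal magnitude but opposite orientations
# (cosine similarity = -1), combined with unit synaptic weights.
aligned = np.array([1.0, 0.0])      # orientation "right"
misaligned = -aligned               # orientation flipped
w = np.array([1.0, 1.0])

# psi: weights applied to the full rotating features (orientations interfere).
psi = w[0] * aligned + w[1] * misaligned
# chi: the same weights applied to the magnitudes only (orientation-blind).
chi = w[0] * np.linalg.norm(aligned) + w[1] * np.linalg.norm(misaligned)

# Without binding: opposite orientations cancel, the output magnitude collapses.
print(np.linalg.norm(psi))          # 0.0

# With binding (assumed equal-weighted combination): the misaligned feature is
# effectively masked out rather than subtracted.
m_bind = 0.5 * np.linalg.norm(psi) + 0.5 * chi
print(m_bind)                       # 1.0

# Same output as if the misaligned feature were replaced by a zero vector.
psi_masked = w[0] * aligned
chi_masked = w[0] * np.linalg.norm(aligned)
assert np.isclose(0.5 * np.linalg.norm(psi_masked) + 0.5 * chi_masked, m_bind)
```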
---
Rebuttal Comment 1.1:
Title: Reply to rebuttal
Comment: Thanks very much to the authors for these clarifications and additional experiments. The explanations and additional results for points 3-5 helped me to develop a better intuition for the method, and I think they will strengthen the paper. My only remaining question is about whether a comparison with slot attention can be performed for the new experiment (detailed in my reply to the general response). | Summary: This work addresses the problem of unsupervised object discovery. It seeks to remedy some limitations (primarily object storage capacity) of the recently introduced synchrony-based approach, CAE [1], and scales it to more visually complex scenes compared to CAE [1] which was only applied to simple grayscale (Shapes and MNIST) datasets. The key idea behind the proposed method is the extension of the number of feature dimensions used to manipulate “phase” (rotation) values associated with image features from ‘2’ in CAE to ‘n’ dimensions in RF. To control these ‘n’ dimensional rotation values use ‘n’ separate bias terms in every layer. Further, they apply their method on pre-trained DINO features to scale the grouping results to real-world datasets.
Strengths: 1) The problem studied in this paper is well-motivated and important.
2) The method presented is novel and as the model class it uses to perform binding (synchrony-based) has received significantly less attention in the literature compared to slot-based approaches.
3) The paper is very well written.
4) The experiments are well performed and the presented method compares favorably to the considered baselines.
Weaknesses: 1) The authors note that one of their three contributions is a new evaluation method. The phases are weighted, but with binary values (0 or 1), which reduces to a difference in the averaging constant ($N$ vs $N-k$) in the denominator of the weighted averaging operation. The threshold used to compute the weights is just a fixed value (i.e., 0.1, same as in CAE [1]) as opposed to being some learned value based on the magnitudes. So, in practice, how different/novel is this evaluation procedure compared to the one used by CAE, to the extent that it can be deemed a core technical contribution of this work?
2) Depth masks provide a strong supervision signal for discovery of object grouping information. The problem of 2 (or more) objects having the same color being grouped together has been resolved through the use of such depth masks i.e. strong supervision (Figure 6 caption).
3) Lines (244-245): “On FoodSeg103, we limit our evaluation of our model to the MBO_c score, …. ”. MBO_c score only measures semantic grouping not instance-level grouping. The pretrained DINO features already show a high level of specialization to semantic classes (attention maps when conditioning on CLS token). Therefore, this experiment does not really meaningfully test the instance-level grouping ability of the proposed method.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) Why do spherical coordinates lead to instabilities? Could not follow the explanation in footnote 1, specifically the point about ordering of the angles and under certain circumstances subsequent angles having no influence on the orientation. Could the authors clarify this point? A visualization would be greatly beneficial in this regard.
2) Lines (242-243): “On the Pascal VOC dataset, we do not compare ARI-BG scores, as we have found that a simple baseline significantly surpasses previously reported …”. I didn’t understand the reasoning behind this choice.
3) (related to weakness #2) Why can’t the proposed method segregate 2 objects of the same color but spatially well separated? If the network has learned a simple rule using just a single feature like color or shape to perform binding, that would be extremely limiting in the general case.
4) (related to weakness #2) Could the authors show the grouping performance (FG-ARI, FULL-ARI) for the colorized ‘Shapes’ dataset without (simply using RGB images as inputs) the use of depth channel information? Have the authors performed an ablation to quantify how much the depth channel information assists the network to predict the object identities?
5) How does the proposed method work on colored multi-object datasets like Tetrominoes, Multi-dSprites or CLEVR that are the first benchmark suite typically used for evaluation in the object-centric literature?
6) Table 1 (Pascal VOC dataset results), compared to the baseline models which use pre-trained DINO features in the encoder module (i.e. DINOSAUR Transformer/MLP) how much instance-level grouping has been achieved through the use of RF? It’s known that the DINO features already possess a high-level of specialization to semantic classes and therefore can perform semantic grouping (by inspecting the attention masks after conditioning on the CLS token). In this regard, SlotAttention and SLATE baselines cannot be considered like-for-like since they still use the pixel reconstruction objective and are trained from scratch. Short point being that how much instance-level grouping is being learned through the use of RF on top of the semantic-level grouping that is already captured by the pretrained DINO features?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Authors discuss limitations of their method, in particular that the capacity to represent multiple objects is currently limited to the output space of the autoencoder and that the methods still lag behind slot-based methods such as DINOSAUR.
References:
[1] “Complex-Valued Autoencoders for Object Discovery”, Lowe et. al, TMLR 2022.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive review. We would like to take the opportunity to respond to the questions and concerns that you have posed.
### Weaknesses
**1) How novel is this evaluation procedure compared to the one used by CAE?**
While our proposed evaluation method closely resembles the CAE's evaluation method mathematically - merely requiring the addition of a weighted sum with binary weights - its conceptual significance should not be understated: it avoids the trivial solutions described in lines 157-160 of the paper that would otherwise make a fair assessment of our approach on multi-channel images impossible.
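As we understand the described procedure, it can be sketched as a binary-masked average of per-channel orientations (the function name, array shapes, and the 0.1 threshold mentioned by the reviewer are illustrative):

```python
import numpy as np

def extract_orientation(z, thresh=0.1):
    """Average the orientations of rotating features across channels,
    keeping only channels whose magnitude exceeds a threshold (0/1 weights).
    z: array of shape (channels, n) -- one n-dim rotating feature per channel."""
    mags = np.linalg.norm(z, axis=-1)                 # (channels,)
    orientations = z / np.maximum(mags[:, None], 1e-8)
    mask = (mags > thresh).astype(z.dtype)            # binary weights
    # Masked channels contribute nothing, so the averaging constant in the
    # denominator shrinks from N to N - k (k = number of masked channels).
    return (mask[:, None] * orientations).sum(0) / np.maximum(mask.sum(), 1.0)

z = np.stack([np.array([3.0, 0.0]),     # strong channel, orientation "right"
              np.array([0.0, 0.05])])   # near-zero channel, masked out
print(extract_orientation(z))           # [1. 0.] -- only the strong channel counts
```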
**2) Depth masks provide a strong supervision signal**
Providing depth information is one possible way to prevent the Rotating Features from grouping together objects of the same color. Alternatively, we show that using features from a pretrained vision transformer works equally well. This approach, utilized by prior work for object discovery [1], is arguably more applicable in practice.
**3) The experiment on FoodSeg103 does not meaningfully test the instance-level grouping ability.**
Unfortunately, the FoodSeg103 dataset, like most semantic segmentation datasets, does not provide instance-level ground truth segmentation masks. This makes it challenging to meaningfully examine the instance-level grouping ability. Nonetheless, we believe that the class-level grouping performance of the Rotating Features on this dataset is noteworthy. For one, the DINO features exhibit a specialization towards the semantic class of a single object per scene when conditioned on the CLS token. In contrast, our experiments assess the performance in extracting multiple objects per scene, without feeding the CLS token into our network. Additionally, the DINO features are only used to set the input magnitudes, with input orientations being set to a fixed value. We evaluate the grouping learned by the output orientations, making our assessment independent of any grouping that may be extractable directly from the DINO features.
### Questions
**1) Why do spherical coordinates lead to instabilities?**
We will replace the example given in the paper with the following one: When a vector's magnitude is zero, the angular coordinates can take any value without changing the underlying vector. As our network applies ReLU activations on the magnitudes, this singularity may occur regularly, hindering the network from training effectively.
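This singularity is easy to see in the two-dimensional (polar) case; a minimal sketch with an illustrative helper:

```python
import numpy as np

# In spherical/polar coordinates a vector is (r, angles...). When r == 0, any
# choice of angles maps to the same zero vector, so the angular parameters
# receive no meaningful gradient -- a singularity that ReLU'd magnitudes hit
# regularly during training.
def from_spherical(r, theta):
    return np.array([r * np.cos(theta), r * np.sin(theta)])

print(from_spherical(0.0, 0.3))   # [0. 0.]
print(from_spherical(0.0, 2.1))   # [0. 0.] -- different angle, identical vector
```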
**2) “On the Pascal VOC dataset, we do not compare ARI-BG scores”. I didn’t understand the reasoning behind this choice.**
In Appendix D.2, we provide a more in-depth discussion on this.
**3) Why can’t the proposed method segregate 2 objects of the same color but spatially well separated?**
In its current form, Rotating Features do not include any inductive bias to enforce a spatial separation between object representations. We believe that this would be an interesting future direction to explore, as it could provide an alternative to our proposed solution that utilizes higher-level input features to perform binding on complex, real-world inputs.
**4) Could the authors show the grouping performance for the colorized ‘Shapes’ dataset without depth information?**
These results can be found in Figure 6, and Appendix Table 7 and Figure 14. They indicate that the network benefits strongly from the additional depth information.
**5) How does the proposed method work on colored multi-object datasets typically used in the object-centric literature?**
In response to the reviewer's suggestion, we evaluate the performance of the proposed Rotating Features model on the Multi-dSprites and CLEVR datasets, comparing it to its predecessor — the CAE model [2]. The outcomes are presented in the table below. Note that since the original CAE was only applicable to grayscale images, we combine it with our proposed evaluation procedure to make it applicable to multi-channel input images, which we denote as CAE*.
|Dataset|Model|ARI-BG|
|--|--|--|
|Multi-dSprites|CAE*|0.371$\pm$0.056|
||Rotating Features|0.888$\pm$0.015|
|CLEVR|CAE*|0.289$\pm$0.042|
||Rotating Features|0.664$\pm$0.013|
The results indicate that the Rotating Features model significantly surpasses the performance of the CAE*, demonstrating good object separation on these two datasets. However, the results still lag behind the state-of-the-art. The qualitative examples in Figure 3 of the PDF uploaded alongside the general response show that Rotating Features encounter the same issue here, as we have seen with the colored 4Shapes dataset: objects of the same color tend to be grouped together. As demonstrated with the RGB-D version of the colored 4Shapes dataset and with our results using pretrained DINO features on real-world images, this issue can be addressed by using higher-level input features.
**6) How much instance-level grouping is being learned on top of the semantic-level grouping that is already captured by the pretrained DINO features?**
As described above, Rotating Features do not directly make use of the semantic specialization of the DINO features that may be found when conditioning on the CLS token. Nonetheless, we will add another baseline from the DINOSAUR paper [1] that allows for a direct comparison between the grouping that may be inherent to the DINO features and the grouping achieved by the Rotating Features model. This baseline applies k-means directly to the DINO preprocessed features and achieves MBO$_i = 0.363$ and MBO$_c = 0.405$. For comparison, Rotating Features achieve scores of MBO$_i = 0.407$ and MBO$_c = 0.460$. We therefore conclude that the Rotating Features improve over the instance-level and semantic-level grouping that may be inherently present within the pretrained DINO features.
---
[1] Maximilian Seitzer, et al. Bridging the gap to real-world object-centric learning. ICLR, 2023.
[2] Sindy Löwe, et al. Complex-valued autoencoders for object discovery. TMLR, 2022.
---
Rebuttal Comment 1.1:
Title: Rebuttal Response
Comment: I thank the authors for their responses to my questions. I still believe the limitations of the paper should be more pronounced in the main text. In particular, the fact that the model relies on the depth masks and the differentiation of semantic vs instance segmentation. In the figures provided in the paper there is not a single case of a figure containing multiple instances of the same class such that the method separates them. This should be made more obvious in the text. I'm happy to stick to my current rating of the paper. | Summary: The paper presents a new approach for extracting objects from distributed representations, based on a binding mechanism called 'rotating features' that extends previous phase-based binding notions to a much higher dimension binding space, and avoids the use of separate slots for individual objects, showing promising results and scaling behavior with both toy and natural-image data sets.
Strengths: The exploration of alternatives to slot-based schemes, so prevalent in transformers and related contemporary architectures, for addressing how neural networks can successfully extract distinct, coherent representations of objects and other entities is an important contribution in itself. For brain science, it is important because the brain is unlikely to have explicit slots of the kind that can be built into artificial networks. For AI, this step may be equally important, as it allows for the possibility of more graded approaches to objecthood that could be important for capturing the kind of graded objecthood of many aspects of the natural world, and of avoiding some of the potential brittleness and arbitrariness of imposing slot-based approaches to structured objects made up of sub-objects.
The model seems to produce fairly impressive results compared to alternative, much more complex transformer models, and beats other comparison models as well. The excellent training time, the possibility of avoiding explicit k-means clustering for segmentation, and the availability of uncertainty maps all seem like desirable properties of the model.
Weaknesses: I have chosen to ask questions rather than express statements about weaknesses because I found that there were important features of the model and of the comparisons that I simply could not fully understand without going to source papers on DINOSAUR and the binding mechanism. I do consider it a weakness of the paper that I was not able to understand these things better, and my rating and confidence would be increased further if these questions were addressed in the rebuttal and revision of the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I found it a bit difficult to be sure I was seeing fair comparisons in table 1. If I understand correctly, DINOSAUR MLP is much simpler than the Rotating Features CNN. Is it slot based in some way?
I have found it difficult to understand how the binding mechanism described on p. 4 works, and I did not feel I got much out of Figure 3. Perhaps the problem lies in my lack of understanding of the meaning of the superscripts on z_in. Another source of confusion is the statement that the extra input dimensions of x (in line 134) are all set to 0, such that they don't seem to be capable of having any effect. Why are these dimensions then needed? I see that learned bias weights apply to the output of f_w and that there are R^(n x d_out) of these. The references to Lowe et al and Reichert & Serre should not be my only source of an understanding of how the mechanism works.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: The limitations as stated seem valid, and I look forward to seeing where attempts to address these limitations will lead.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the feedback from your thoughtful review. We would like to take this opportunity to address the two questions you posed:
**I found it a bit difficult to be sure I was seeing fair comparisons in table 1. If I understand correctly, DINOSAUR MLP is much simpler than the Rotating Features CNN. Is it slot based in some way?**
The DINOSAUR MLP model is a slot-based model, which combines Slot Attention with a spatial broadcast decoder. In essence, it encodes the DINO preprocessed features into slots, and subsequently decodes each slot individually via an MLP decoder. The predicted slot mask from this decoder is then used to recombine the individual reconstructions, and to evaluate the object discovery performance of this model. We will clarify our description in the paper to make the comparison between the models more comprehensible.
**I have found it difficult to understand how the binding mechanism described on p.4 works, and I did not feel I got much out of Figure 3. Perhaps the problem lies in my lack of understanding of the meaning of the superscripts on $z_{in}$. Another source of confusion is the statement that the extra input dimensions of $\mathbf{x}$ (in line 134) are all set to 0, such that they don't seem to be capable of having any effect. Why are these dimensions then needed? I see that learned bias weights apply to the output of $f_{\mathbf{w}}$ and that there are $\mathbb{R}^{n \times d_{\text{out}}}$ of these.**
The rotating feature vector $z_{in} \in \mathbb{R}^{n \times d_{\text{in}}}$ has $n$ rotating dimensions and $d_{\text{in}}$ feature dimensions, i.e., channels. Our data does not contain any rotating information, so we introduce the additional dimensions by padding the input with zeros. Your observation regarding the biases is correct - they serve as the model's mechanism that allows it to rotate the initial features, and thus to make use of the additional dimensions and to push them away from zero. Essentially, every feature dimension has the capacity to learn a distinct orientation offset through this bias.
Without the binding mechanism, the model would fail to learn to leverage these additional dimensions, as they inherently do not have a strong effect on the computations. The binding mechanism ensures that features with similar orientations are processed together, while features with dissimilar orientations are essentially masked out. This allows the network to create separate streams of information that it can process separately - which naturally leads to the emergence of object-centric representations.
Regarding Figure 3, we agree that the superscripts on $z_{in}$ are ambiguous. We will amend the figure caption, improving notation and description, to the following:
Effect of the binding mechanism. We start by randomly sampling two column vectors $\mathbf{a}, \mathbf{b} \in \mathbb{R}^n$ with $\left\lVert{\mathbf{a}}\right\rVert_2 = \left\lVert{\mathbf{b}}\right\rVert_2 = 1$. Assuming $d_{\text{in}} = 3, d_{\text{out}} = 1$ and $f_{\mathbf{w}}$ is a linear layer, we set $z_{in} = \left[ \mathbf{a}, \mathbf{a}, \mathbf{b} \right]$, weights $\mathbf{w} = \left[ \frac{1}{3}, \frac{1}{3}, \frac{1}{3} \right]^T$ and biases $b = \left[0, ..., 0\right]^T$. Then, we plot $m_{bind}$ and $\left\lVert{\psi}\right\rVert_2$ on the y-axis. These denote the magnitudes of the layer's output before the application of the activation function, with (blue) and without (orange) the binding mechanism. Without the binding mechanism, misaligned features are effectively subtracted from the aligned features, resulting in smaller output magnitudes. The binding mechanism masks out features with dissimilar orientations, reducing this effect and leading to consistently larger magnitudes in $m_{bind}$. In the most extreme scenario, features with opposite orientations (i.e., with a cosine similarity of -1) are cancelled out by the binding mechanism, as the output magnitude ($\frac{2}{3}$) would remain the same if $z_{in} = \left[ \mathbf{a}, \mathbf{a}, \mathbf{0} \right]$.
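The extreme case from this caption can be checked numerically. The sketch below is only a toy illustration: the paper's actual binding mechanism is more involved, and the binary `gate` used here (masking features whose orientation opposes the weighted output) is our simplified stand-in for it.

```python
import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    return v / np.linalg.norm(v)

n = 8                                # number of rotating dimensions
a = unit(rng.standard_normal(n))
b = -a                               # extreme case: cosine similarity of -1

z_in = np.stack([a, a, b])           # d_in = 3 feature vectors
w = np.full(3, 1.0 / 3.0)            # the weights from the caption

# Without binding: the misaligned feature b is subtracted from the
# aligned ones, shrinking the output magnitude.
psi = w @ z_in                       # = (a + a - a) / 3 = a / 3
mag_plain = np.linalg.norm(psi)      # -> 1/3

# With binding (hypothetical gate): mask out features pointing away
# from psi, then recompute; the opposite feature is cancelled entirely.
gate = (z_in @ unit(psi)) > 0        # -> [True, True, False]
m_bind = np.linalg.norm(w @ (z_in * gate[:, None]))  # -> 2/3
```

As in the caption, the gated magnitude equals $\frac{2}{3}$, exactly what one obtains with $z_{in} = [\mathbf{a}, \mathbf{a}, \mathbf{0}]$, while the ungated magnitude shrinks to $\frac{1}{3}$.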
---
Rebuttal Comment 1.1:
Title: Thanks for responses, new experiment strengthens findings further
Comment: I thank the authors for their responses to my questions. The innovative nature of this work makes it exciting and challenging, and the new experiment demonstrating generalization to more objects further underscores the promise of the approach. I certainly continue to believe this paper deserves the attention of the community. While the results may not yet be ground-breaking, the approach and direction certainly are. | Rebuttal 1:
Rebuttal: We thank the reviewers for their constructive feedback. We are delighted to see the reviewers recognize that the exploration of alternatives to slot-based schemes as studied in our paper is important (e8db, r4Rq) and interesting (cH5o), and that our work may stimulate many interesting directions for future work (cH5o). Further, the reviewers have noted that the experiments are well performed (r4Rq), showing that the proposed Rotating Features achieve promising results (e8db, cH5o, VTkG) while being very efficient to train (e8db, cH5o). Additionally, reviewer r4Rq stated that our paper is very well written.
We want to use this general response to highlight an additional experiment that we have conducted following suggestions of reviewers cH5o and VTkG. This experiment shows that Rotating Features can generalize beyond the number of objects observed during training, even when the number of objects in the test images is unknown.
### Generalization to more objects
We conduct an experiment to assess the flexibility of Rotating Features in segmenting varying numbers of objects, thereby testing its ability to generalize beyond the number of objects observed during training. Additionally, this experiment evaluates the adaptability of the model when the number of objects in a scene is unknown.
To train the Rotating Features model, we make use of a modified version of the 10Shapes dataset. This version includes the same ten unique shapes as the original. However, only six of these shapes are randomly selected to appear in each image.
Post-training, we test the trained model with a range of variants of this dataset, each displaying between four and ten objects per image. We present the results in Figure 1 in the PDF uploaded alongside this general response. First, we observe that when the number of objects is known and $k$ for $k$-means is set accordingly, the performance of the Rotating Features model is best when fewer objects are present in each image, and decreases as more objects are added. However, considering the increase in difficulty when incorporating more objects in a scene of a fixed size, the model maintains a relatively stable performance across various numbers of objects per scene. Second, when the number of objects in a scene is not known, and we apply $k$-means with a fixed value of $k=7$ (corresponding to the number of objects observed during training, plus one for the background), performance degrades considerably the more the true number of objects deviates from the fixed value of $k$. However, this problem can be circumvented by using agglomerative clustering, and by setting the distance threshold on the training dataset to reflect the best performance when there are six objects in a scene. Using the same threshold across all test settings maintains consistent performance for varying numbers of objects in each scene, albeit slightly inferior to the $k$-means baseline, with $k$ representing the true number of objects.
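The key property exploited above - that agglomerative clustering with a distance threshold discovers the number of groups rather than fixing it in advance, unlike k-means with a fixed $k$ - can be illustrated with a minimal sketch. The centroid-linkage implementation and the toy 2-D points are ours, for illustration only; the actual pipeline clusters the output orientations of the Rotating Features.

```python
import numpy as np

def agglomerative_threshold(points, threshold):
    """Centroid-linkage agglomerative clustering with a distance threshold.

    Merging stops once no pair of cluster centroids is closer than
    `threshold`, so the number of clusters is not fixed in advance.
    """
    clusters = [[i] for i in range(len(points))]
    while True:
        best_d, best_pair = None, None
        # Find the closest pair of cluster centroids.
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = np.linalg.norm(points[clusters[i]].mean(axis=0)
                                   - points[clusters[j]].mean(axis=0))
                if best_d is None or d < best_d:
                    best_d, best_pair = d, (i, j)
        if best_d is None or best_d >= threshold:
            break  # all remaining clusters are well separated
        i, j = best_pair
        clusters[i].extend(clusters.pop(j))  # j > i, so i stays valid
    labels = np.empty(len(points), dtype=int)
    for k, members in enumerate(clusters):
        labels[members] = k
    return labels

# Three well-separated groups are recovered without specifying k.
pts = np.array([[0.0, 0.0]] * 3 + [[10.0, 0.0]] * 3 + [[0.0, 10.0]] * 3)
labels = agglomerative_threshold(pts, 3.0)
```

A threshold calibrated on training scenes with six objects can then be reused unchanged on test scenes containing any number of objects, which is what keeps the performance consistent across the test settings above.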
In summary, our results suggest that Rotating Features can generalize beyond the number of objects observed during training, even when the number of objects in the test images is not known. We will include this experiment in the revised paper.
Pdf: /pdf/c83f8e47d9cd2fe04d289046bad59c5cd08ab52d.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Neural Foundations of Mental Simulation: Future Prediction of Latent Representations on Dynamic Scenes | Accept (spotlight) | Summary: It has been hypothesized that the brain builds an internal model of its environment, and uses this model to make inferences and plan actions. This works aims to understand the neural mechanisms that are the basis of such computations. To this end, the authors construct several classes of artificial neural network models, each with different inductive biases, and evaluate them on future state prediction tasks in ethologically relevant environments.
They evaluate each of these models in two ways:
- comparison to human behavior in an object contact prediction task
- comparison to neurophysiological data from macaques playing mental-pong, a ball interception task in which the ball is partially occluded
From these evaluations, they identify that dynamics modules that predict the future state in the latent space of video foundation models reasonably matches both human behavioral patterns and neural responses across diverse environments.
Strengths: Understanding the neural basis of mental simulation is an important problem both in neuroscience and AI.
The approach taken here is to construct several classes of artificial networks and evaluating them on tasks that require prediction of the future state of the environment. This allows the authors to probe which inductive biases are crucial for mental simulation.
The presentation is clear as well.
Weaknesses: Not really a weakness, but details about the best fitting model layers would be useful to add. It would also be interesting to see some analysis comparing the best fitting latent layers across different models.
minor:
- line 17, 18 in the abstract talks about the mental pong task without introducing it
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - The R3M and VC-1 video foundation models (+ dynamics) are the best at neural response predictivity. However, one is a ResNet and the latter is a transformer based model. Also the training objectives are different for both models. Why is the performance of these two models similar? Is it the scale and/or type of pre-training dataset?
- For the object-centric models did you try using recurrent networks for the dynamics module?
- Was any analysis performed to compare the best-fitting latent representations across models?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are addressed in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - *Not really a weakness, but details about the best fitting model layers would be useful to add. It would also be interesting to see some analysis comparing the best fitting latent layers across different models.*
Absolutely agree. We actually compared the latent layers in Figures 2-4 across different models (these are the leftmost bars in each group of the nine image and video foundation model classes that we tried). We found that the best latents overall were from the video foundation models (VC-1 and R3M) trained in a self-supervised manner on the Ego4D dataset. The dynamically-equipped versions of these models, optimized to predict the future state of the environment in this latent space, were the best across the neural and behavioral metrics we compared (Figure 4B). For clarity, we will include a plot with just the latent layers alone in Figure S3 of the revised manuscript. Thank you for the suggestion.
- *line 17, 18 in the abstract talks about the mental pong task without introducing it*
Thank you for pointing this out, we will introduce the Mental Pong task in the revised abstract prior to lines 17-18.
- *The R3M and VC-1 video foundation models (+ dynamics) are the best at neural response predictivity. However, one is a ResNet and the latter is a transformer based model. Also the training objectives are different for both models. Why is the performance of these two models similar? Is it the scale and/or type of pre-training dataset?*
This is a great question. We believe it is likely due to the combination of a self-supervised loss function and pretraining on the Ego4D dataset. For example, both R3M and VIP have the same architecture (ResNet-50) trained on Ego4D, but VIP, trained using an offline RL (goal-conditioned) objective, predicts neural responses poorly (Figure 3B, dark green bars) relative to R3M (even at the level of latents), whereas R3M is trained with a contrastive, self-supervised objective. Furthermore, when trained with a categorization objective on ImageNet, ResNet-50 performs poorly (Figure 3B, red bars, “Image Foundation Models”), even at the level of the latents (Figure 3B, leftmost red bar, “Image Foundation Models”). Similarly, though somewhat less directly, Transformers like DeIT, DINO, DINOv2, and CLIP are trained with self-supervised objectives, including contrastive ones, on webscale datasets even larger than ImageNet (“Image Foundation Models” bars in Figure 3B), yet perform worse than the self-supervised VC-1 transformer trained on Ego4D. We will include discussion of these points above in Section 5.2 of the revised manuscript. Thank you for bringing it up.
- *For the object-centric models did you try using recurrent networks for the dynamics module?*
Yes, the object-slot models are trained with a recurrent graph neural network (GNN) dynamics module, originally proposed by Kipf et al., 2020. We apologize for this not being clear in the text, and we will mention this more clearly in Section 3 when the model is first introduced. Thank you for asking this.
- *Was any analysis performed to compare the best-fitting latent representations across models?*
Yes, this is a great question. We actually evaluated the latent layers against our neural and behavioral metrics in Figures 2-4 (refer to the leftmost bars in each group of nine foundation models). The most effective latent representations were from the video foundation models self-supervised on Ego4D (VC-1 and R3M), and were most differentiated on neural responses (new Figure S3, panel C). Their dynamically-equipped versions, optimized to predict the future state of the environment in this latent space, excelled in all neural and behavioral metrics (Figure 4B). We will add a dedicated plot of the latent layers alone in a new Figure S3 in our revised manuscript for better understanding. We appreciate the recommendation.
---
Rebuttal Comment 1.1:
Title: response to rebuttal
Comment: Thank you to the authors for these responses. My questions have largely been addressed and I am happy to keep my rating. | Summary: The authors compare "foundation models" of vision for mental simulation. They consider several large models including models trained on static scenes and dynamic scenes. It was found that the models optimized with self-supervision on dynamic scenes yielded the best neural predictivity.
Strengths: The work is interesting since mental simulation is relatively understudied but important for embodied models. The paper is clearly written.
* The work is thorough in testing out a lot of models and model families.
* Results on neural predictivity is strong for the video foundation models.
Weaknesses: * Since all the video foundation models are trained with Ego4D and the image foundation models are trained with ImageNet, it is not straightforward to disentangle the effect of dataset vs role of dynamics itself.
* The models only differ in terms of neural predictivity while being similar to each other in accuracy and correlation to human responses. Previous work (Rajalingham et al. [2022b]) seem to support that model behavior should be measured in terms of correlation to human responses to disambiguate between models - with a high correlation suggesting that the model is using the same strategy as humans. In light of this, the results presented seem weaker than as claimed.
* I do not see the contributions of the paper beyond "large models trained with dynamic scenes data are better at neural predictivity for a dynamic task" which is not surprising.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: *Fixed encoder that is not trained during "dynamics" training seems too constraining (also unlike human brains). This might have unequal effects on the different models. For example, it might have hurt imagenet models more than others etc. Have you tried versions where you did update the encoders too?
*There seem to be some models (Fig. 3 D) that are very good in accuracy but not so good in neural predictivity. Do you know why this is happening?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - *Since all the video foundation models are trained with Ego4D and the image foundation models are trained with ImageNet, it is not straightforward to disentangle the effect of dataset vs role of dynamics itself.*
We wanted to understand this question too, and as a result, we included the latents (without any dynamics) in Figures 2-4 (refer to the leftmost bars in each group). Across all nine classes of image and video foundation models that we tried, the most effective latent layers were from the self-supervised video foundation models trained on Ego4D (VC-1 and R3M), as can be seen in Figure 3. However, the latent representations of VC-1 and R3M alone are significantly worse at predicting neural responses than adding dynamics on top of it (remaining bars in each group in Figure 3B). Furthermore, the dynamically-equipped versions of VC-1 excelled in all neural and behavioral metrics (Figure 4B) relative to their latents and those of other models. Thank you for raising this important point.
- *Previous work (Rajalingham et al. [2022b])…support that model behavior should be measured in terms of correlation to human responses… In light of this, the results presented seem weaker than as claimed.*
Rajalingham et al. 2022b trains and evaluates models in the same Mental-Pong environment; however, we only *evaluate* our models in this environment, since we are after the more challenging problem of task generalization to multiple diverse scenarios (in the Physion and Mental-Pong environments). For example, as we show in Figures 3C and S2, even state-of-the-art models like FitVid (a pixel-wise future predictor) fail to generalize to Pong after being trained in Physion, unlike our dynamically-equipped video foundation models. The reason why we focus on generalization to Mental-Pong is that monkeys can perform these tasks without substantial training, suggesting that they are already equipped with the necessary neural foundations for mental simulation in this environment. Therefore, we aim to also build networks that are not explicitly trained on Mental-Pong itself, but are tasked to generalize to this novel setting as a test of their general understanding of physical scene dynamics -- chiefly developed through three factors: their architecture, optimization objective, and pretraining on a naturalistic environment.
- *I do not see the contributions of the paper beyond "large models trained with dynamic scenes data are better at neural predictivity for a dynamic task" which is not surprising.*
Actually, not all large models trained on videos predict neural data well. In fact, a major point of our paper is that many state of the art machine learning models for future prediction fail to match neural and behavioral data well. For example, of the models trained on dynamic scenes, neither the pixel-wise future predictors nor the object-slot models do this well (“End-to-End” models in Figure 3B), nor does the video foundation model VIP or its dynamically equipped version do it well either (green bars in Figure 3B). Our work therefore suggests strong constraints on models, specifically *not* through RL, pixel-wise losses, or bespoke object-slots, but rather through self-supervised pretraining from egocentric views that humans and animals naturally receive. Crucially, doing future prediction on this *reusable* latent representation is important (since the fixed latent alone is not sufficient, see the leftmost bars in Figure 3B in each group).
- *Fixed encoder that is not trained during "dynamics" training seems too constraining (also unlike human brains)....Have you tried versions where you did update the encoders too?*
That’s a great question, and one that we have looked into. In fact, our decision to use a fixed encoder during “dynamics” training, despite seeming restrictive, was largely influenced by our experimental findings. For example, prior state-of-the-art models like end-to-end pixel-wise or object-slot predictors did not yield superior results (cf. Figure 3 “End-to-End” bars) and struggled to generalize to Mental-Pong (cf. Figure 3C dark green FitVid bar, and Figure S2), even though their encoders were updated during training. Based on these observations, we saw that generalization to novel environments failed even when the encoders are allowed to update during training, especially if inductive biases such as the loss function and pretraining dataset are not properly chosen. In fact, this approach aligns with insights from more cognitively-inspired graph neural networks (GNNs), which propose to solve intuitive physics tasks by modeling dynamics on meaningful entities, like objects, their material properties, and their relations, rather than raw pixels. Since GNNs can match the OCP human error patterns well (as shown in Bear et al., 2021), they suggest that one can “factorize” the problem of intuitive physics into solving a challenging vision problem (e.g. an understanding of the scene) with dynamics that operates on that representation. These observations together motivated our fixed foundation model encoder with dynamics approach.
Going forward, resources permitting, we are planning to explore hybrid models that train the encoder with a masked autoencoding (MAE) style objective applied to next-frame prediction on Ego4D and larger datasets like CortexBench. We believe these methods could better leverage temporal relationships to learn reusable *and* object-based latent representations.
- *There seem to be some models (Fig. 3 D) that are very good in accuracy but not so good in neural predictivity. Do you know why this is happening?*
Ball position & velocity offer coarser measures than neural predictivity, with far fewer tracked quantities (4 vs. 1889). Thus, many inductive biases are able to track 4 quantities, but far fewer can track 1889. Our goal is models that excel in **both** metrics, as shown with the dynamically-equipped *video* foundation models in Figure 3D (top right circles).
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response!
> Rajalingham et al. 2022b trains ............... chiefly developed through three factors: their architecture, optimization objective, and pretraining on a naturalistic environment.
To clarify my question - I am questioning why the results from Fig 2 B do not support your claims that video foundation models are better - the image models are as good. Rajalingham et al. 2022b supports that correlation to human response is what we should look at to adjudicate between models, but here the results seem to suggest no superiority of the video models. Am I right in saying this?
I am cognizant of the breadth of this work but I was just expecting stronger results. I do see however that for neural data predictivity, the video models are better. I think many of my earlier reservations were addressed and I am compelled to increase my score.
---
Reply to Comment 1.1.1:
Title: Reply to Reviewer tmZS
Comment: Thank you for your consideration of our comments, and for your engagement, we appreciate it!
- *I am questioning why the results from Fig 2 B do not support your claims that video foundation models are better - the image models are as good. Rajalingham et al. 2022b supports that correlation to human response is what we should look at to adjudicate between models but here the results seem to suggest no superiority of the video models. Am I right in saying this?*
The behavioral data that Rajalingham et al. 2022a,b refers to is tracking of the Pong ball's position, which is judged in humans based on eye tracking data. Therefore, the most relevant data is Figures 3C & D, rather than Figure 2B -- we will make this connection clearer in the revision, thank you for mentioning it. These figures evaluate how well the models track the ball in the Pong environment. As you can see in Figure 3C, the video foundation models that best match DMFC neural dynamics (from Figure 3B) while the macaque plays Pong also best track the ball, approaching DMFC's ability to do so (rightmost red and blue bars approaching the grey horizontal line in Figure 3C). This correspondence between predicting neural response dynamics and tracking the ball is quantified in Figure 3D, and you can see the video foundation models do those both best (top right circles in Figure 3D).
It is also worth noting that our human behavioral responses in Figure 2 are a completely different set of judgements from tracking the Pong ball. Specifically, these are object contact predictions in naturalistic 3D environments. Our main point with Figure 2B is to show that the pixel-wise models subtly "overfit" to the statistics of the environment in which they are trained, and fail to generalize to the novel environment of Pong -- a hallmark of both human and macaque intuitive physics, as they can learn the Pong task almost immediately. In other words, the OCP task in Figure 2 involves held-out videos but in the *same* training environment as the models. In Figure 3, by contrast, Pong is a completely new environment, and we see that pixel-wise models (especially FitVid, dark green bars in Figure 3C) fail to track the ball. This failure is visualized in Figure S2, where the ball is held fixed in FitVid's simulation. Furthermore, in part due to the novelty of the Pong environment relative to the models, and due to having to track neural *dynamics* rather than a static yes/no judgement as in OCP, the neural data more strongly differentiates individual models than the human behavioral responses do (Figure 3B vs. Figure 2B). Despite this, here again the video foundation models (specifically the dynamically-equipped VC-1 models) match *both* human error patterns on OCP and neural responses in Pong reasonably well compared to all other models (shown in the blue circles in Figure 4B). | Summary: The paper compares a rather large variety of DL models that are able to “future predict” environmental states, including pixel-based deep networks, compositional approaches (e.g., slot-wise processing of objects), as well as image and video foundation models. In the latter case, the latent space of the foundation model is used for future prediction.
The models are then evaluated by comparing their prediction accuracy in an object contact prediction (OCP) task with human performance and in a mental pong task with macaque neuro-biological data.
Strengths: The paper offers a huge data study that reveals model differences when predicting human behavioral performance and macaque neural dynamics data. At the moment, video-based foundation models yield the best performance (though not a very good one, and not much better than that of other models). Generally, the generated data is interesting and should not be lost.
Weaknesses: On the negative side, the paper focuses on only two tasks, which are not very well-motivated. I have found neither a clear motivation to choose these tasks nor a clear hypothesis what the authors would expect.
Further, the results that have been achieved are very far from human-to-human consistency / human accuracy and also very far from inter-animal consistency. This implies to me that the results and the performance differences between the models are not really due to actual systematic differences between the models and the model relations to how humans process the data; rather, they may be due to more fundamental design choices. Moreover, particularly in the OCP task, the model performance differences nearly seem random.
Surprising to me is the fact that object-centric models yield the worst performance (but all are bad), although the task itself is suitable for an object-centric setup. Maybe the authors did not tap into the appropriate latent state representation – or is this the case because they do not allow one to actually simulate dynamics with the object-centric setup?
This concern goes hand-in-hand with the fact that the authors do not report an oracle-like performance in the OCP task (as they do in the mental-Pong dataset). From the CogSci work of Tenenbaum, Goodman and others it is well-known that humans do use an internal simulator to approximately solve such tasks… none of the systems the authors consider are able to actually run an internal simulation of the potentially unfolding dynamics. Rather, the authors seem to tap directly into the static latent state of the respective models.
In the mental-pong task, the authors do provide an oracle comparison, which indeed yields the best neural predictivity performance, followed by VC-1 and R3M, both of which can learn to predict ball position and velocity because they learn to predict video dynamics where entities, for example, fall, fly, and roll around. Thus, I do not really see any insight created by these comparisons, except for that a model is needed that predicts ball positions and velocities… in the end, the oracle should be the baseline, which needs to be beaten by a neural network to come closer to understanding how our brain / macaque brains solve this task.
Seeing this huge study, I am also quite concerned about the compute time (energy) invested here. The authors do not provide enough information about training etc. Thus, it is impossible to even estimate this investment.
The final discussion stays rather superficial on the actual insights gained (because unfortunately there are no significant insights really). The mere task to match human data / macaque data even better may be interesting for some sub-community along the neuro-DL interface, but I miss both cognitive and structural insights that go beyond the fact that object dynamics (including velocities) as well as object constellations need to be encoded in the considered tasks.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: A few more detailed comments:
The abstract starts very superficial – it took me a while to understand what you are actually after. A full re-write seems necessary.
In the introduction, the second paragraph emphasizes that “predicting the physical dynamics of environments” is critical… but isn’t this precisely what you emphasize in paragraph 1 of the introduction, that is, that mental models are important? Only mental (intuitive-physics-approximating) models allow one to predict physical dynamics; isn’t this the case by definition?
Over-general implications are drawn in the introduction – the results are not strong enough in my opinion to support the put-forward conclusions. Moreover, the written implications stay on a rather superficial level, although the results are very particular to the two tasks considered and the methods employed.
Finally, on the methods side, more details would have been useful to compare sizes and understand the exact training procedure to fit the data.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: not addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - *…the paper focuses on only two tasks, which are not very well-motivated.*
We thank the reviewer for pointing out the need for this clarification, and will add it to the Introduction. Specifically, the OCP task (Bear et al. 2021) tests realistic simulations of a wide range of everyday physical phenomena, including rigid and soft-body collisions, stable multi-object configurations, rolling, sliding, and projectile motion, in a realistic 3D environment with 2,000 scenarios per condition, and is therefore more comprehensive than previous behavioral benchmarks. Furthermore, Mental Pong is the *first neural* dataset shown to involve mental simulation, and is a dense neurophysiological dataset containing almost 2,000 neurons recorded with high temporal precision (Rajalingham et al. 2022b). Both are therefore high-throughput, whether in the behavioral readout (OCP: 100 human subjects across 16,000 scenarios) or in the neural readout. In addition to these tasks, we benchmark a large variety of models (41 models total), none of which are strawman models – they can functionally perform the task on large-scale datasets, and many of them are considered state-of-the-art. Yet, when compared against our human behavioral and neural benchmarks, these functionally reasonable hypotheses are strongly differentiated, in part due to the benchmarks' high-throughput nature.
- *Further, the results that have been achieved are very far from…consistency*
This is actually a major point of our work: we do not claim to have solved intuitive physics, but rather first show the limitations of prevalent approaches and offer promising future directions validated against strong neural and behavioral benchmarks. Even though we utilize state-of-the-art machine learning models, they still significantly lag behind primate intuitive physics. As seen in Figures 2C and 3D, aligning with human behavior and neural dynamics directly pertains to solving the AI task in each environment. Notably, the widely-used pixel-wise future predictor excels in familiar settings but overfits in novel ones, like the Pong scenario. This underscores our emphasis on out-of-distribution structural generalization, rather than the typical ML generalization where training and testing environments are statistically similar.
Moreover, our work *also* points to a concrete way forward to better models. In particular, we find that latent future prediction appears to be the most promising paradigm overall, compared to the popular alternatives of pixel-wise future prediction, object slots, and the fixed latents alone of foundation models. In other words, the problem of intuitive physics can effectively be “factorized” into solving a challenging vision problem and then equipping that scene representation with dynamics. And perhaps more crucially, this visual representation needs to be reusable across dynamic scenes. This suggests that reusability is an important factor that future models should incorporate, and points to a move away from the commonly used web-scale image sets (no matter their scale), whose snapshots have lighting conditions that tend to be unrealistic relative to the egocentric viewpoints humans and animals receive. Furthermore, the dynamics architecture could benefit from better leveraging a more object-based representation of temporally-active state variables (as suggested by the high neural predictivity of the joint, ground-truth position + velocity oracle; rightmost bar in Figure 3B). Taken together, these observations suggest that a natural next step is to develop self-supervised loss functions that better leverage temporal relationships to learn object-based video foundation models, coupled with latent-future-predicting dynamics on top of these reusable, object-based latent representations. One way toward that goal is to adapt the masked objective in MAE to operate on the future frame rather than the current frame, and to train on a more diverse video dataset similar to CortexBench, rather than Ego4D alone.
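To make this proposed direction more concrete, here is a minimal toy sketch (an editorial illustration, not the authors' implementation) of an MAE-style masked reconstruction loss whose target is the *next* frame rather than the current one. A real model would operate on image patches with a vision transformer; here a "frame" is just a flat list of patch values, and `masked_future_frame_loss` is a hypothetical name.

```python
import random

def masked_future_frame_loss(pred_future, true_future, mask_ratio=0.75, seed=0):
    """MAE-style loss where the reconstruction target is the future frame.

    pred_future / true_future: flat lists of patch values, toy stand-ins
    for the patch embeddings of the next video frame.
    """
    rng = random.Random(seed)
    n = len(true_future)
    n_masked = max(1, int(mask_ratio * n))
    # As in MAE, score reconstruction only on a random subset of masked
    # patches -- the change is that the targets come from the *future* frame.
    masked_idx = rng.sample(range(n), n_masked)
    return sum((pred_future[i] - true_future[i]) ** 2 for i in masked_idx) / n_masked

# A perfect prediction of the next frame incurs zero loss...
print(round(masked_future_frame_loss([0.2] * 8, [0.2] * 8), 6))  # 0.0
# ...while a constant off-by-one prediction incurs unit loss.
print(round(masked_future_frame_loss([1.2] * 8, [0.2] * 8), 6))  # 1.0
```

The only change relative to plain MAE is the target: scoring masked patches against the next frame turns the objective into prediction rather than pure reconstruction, which is the temporal signal the rebuttal argues for.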
- *do they not allow to actually simulate dynamics with the object-centric setup?*
Our object-slot models use a learned graph-neural-network module to simulate forward dynamics on top of the object-slot latent representation.
- *the authors do not report an oracle-like performance in the OCP task (as they do in the mental-Pong dataset).*
The CogSci work, as we discussed in Section 2, provides a near-oracle model for the OCP task using ground-truth simulator states rather than real-world sensory inputs. We will indicate their numerical performance in the revision. Given the ground-truth inputs these CogSci models take in, the dynamics prediction problem in settings such as Mental-Pong becomes trivial, since there is only a single object; our focus is therefore on the much more challenging problem of bridging the gap from raw sensory inputs to behavior.
- *none of the systems the authors consider are able to actually run an internal simulation of the potentially unfolding dynamics.*
Actually, the dynamics are additionally trained on top of the latent representation, even in the cases when the visual encoder is fixed. We in fact train multiple dynamics architectures (CTRNN and LSTM) across all foundation models, on both Physion and the larger Kinetics 700 dataset.
- *In the mental-pong task… I do not really see any insight created by these comparisons, since the oracle should be the baseline.*
The oracle models are not baselines since they possess perfect information inaccessible to animals or pixel-based models, as the ball is occluded. Oracles reveal that using more object-centric priors, which access both ball position & velocity, outperforms relying on just one of these. Taken with Fig. 4B, these indicate that future models should learn reusable *and* object-based representations from egocentric videos.
- *Limitations: not addressed*
The Limitations section was in the initial submission, in the first paragraph of Supplementary Material.
---
Rebuttal Comment 1.1:
Title: still many open questions - but important things have been clarified - thank you!
Comment: From the general rebuttal reply by the authors, I am now much more convinced that this paper actually does have some merit, which I had overlooked in my review. I think what put me off was the huge analysis and the lack of a clear hypothesis. As the authors write in their general rebuttal, I highly recommend expanding on these aspects – particularly in Section 2. Additionally, the abstract and intro can be improved. I recommend getting to the point faster – specify exactly which data you are considering in your modeling work (behavioral and neural data from humans and primates). Then put forward your hypothesis and/or the main conclusions that you draw.
Thereby, I am still confused why the slot-based models do not work well at all while you are suggesting that more “factorized representations of object state dynamics” will be important. Shouldn’t slot-based models produce such factorized representations?
I also recommend not emphasizing the size of your data or the throughput overly much, but rather the intention you have in using it. In fact, could you explain to me (and probably many readers, I also asked some colleagues) what you actually mean by “dense” neurophysiological data? You highlight it also, for example, in the limitations and the abstract as an important factor to constrain your models. Similarly, I am honestly not quite sure what you mean by “high-throughput human behavioral readouts”.
Please also note that the need for out-of-distribution structural generalization is well-known in cognitive modeling (e.g., Lewandowsky, S., & Farrell, S. (2011). Computational modeling in cognition: Principles and practice. Sage Publications).
I highly recommend discussing the large gap between human-to-human consistency and model-to-human consistency further --- this was one of my main concerns and still makes me hesitate to fully recommend publication. Are the models any good at all? Are model differences in terms of model-to-human correlation actually meaningful? I am still not quite sure.
Please excuse that I had overlooked the limitation section in the appendix. I still find it rather short, though. With respect to the rather weak model-comparison levels reached, maybe a comparison to similar modeling approaches – say on the level of primary visual areas – would help? That is, a comparison between the levels such models reach and the level that you are reaching (compared to the human-to-human consistency)? The gap could tell a story about how much is missing.
Thank you for clarifying the oracle value in the pong task – the ball is not visible behind the occluder – all considered models fail to track the ball, correct? Maybe a reference to the fact that babies are able to track the ball by 3 months of age, if not earlier, would be warranted --- which again somewhat suggests to me that the study is to a certain extent overkill.
Nonetheless, I fully agree to the insights and to the messages that you want to convey, which you have made now much clearer in the rebuttal. Seeing that comparisons like the one presented are en vogue, this work and the insights are timely and will gain recognition (and citations).
I thus increase my score to weak accept.
---
Reply to Comment 1.1.1:
Title: Reply Part 1 of 2 to Reviewer 6vWo Comment
Comment: Thank you for your thorough reading of our response, and for taking it into consideration for raising your score, we greatly appreciate it!
We will definitely implement the changes you suggest, they are very helpful. We will expand on the insights gained by our analysis in Section 2, and specify the data and hypotheses more clearly in the Abstract and Introduction, alongside their conclusions. We will be sure to cite **Lewandowsky, S., & Farrell, S. (2011). Computational modeling in cognition: Principles and practice. Sage Publications**, thank you for pointing us to it.
**Given the character limitations, we have split up our reply in two parts. What follows is Part 1 of 2 of our reply.**
- *Thereby, I am still confused why the slot-based models do not work well at all while you are suggesting that more “factorized representations of object state dynamics” will be important. Shouldn’t slot-based models produce such factorized representations?*
This is a great question, and a point that we want to emphasize more in our revision. The fixed object slots are an example of an object-centric model that is *not* very reusable on novel dynamic scenes. As our dynamically-equipped video foundation models demonstrate, reusability is an important feature of the most successful models on our benchmarks. Therefore, future models could be object-centric so long as they do *not* sacrifice reusability – since, as we show in the case of the fixed object-slot models, being object-centric *alone* is not sufficient to guarantee generalization to novel scenes. One idea that might aid reusability for this model class, by better leveraging large video datasets like Ego4D, is to have more dynamically updated object slots (such as in material type or number), though more work would need to be done to properly identify what these update rules should be. Our initial results in this domain are precisely why we are working towards more object-based video foundation models. We will clarify this in Section 6 in relation to these models, since this is an important point we do not want readers to miss – thank you for bringing it up.
- *In fact, could you explain to me...what you actually mean by “dense” neurophysiological data?...not quite sure what you mean with “high-throughput human behavioral readouts”.*
Thank you for asking for this clarification, and we apologize for the lack of clarity in using these terms. We will define them in the revised Introduction. The terms “dense” and “high-throughput” are synonymous and refer to the fact that the number of comparisons being made in the neural and behavioral data is large (in the thousands). For example, our Mental-Pong neurophysiological dataset is “dense” because it contains almost 2,000 neurons recorded with high temporal precision, so our models not only have to match each of the neurons well, but also across timepoints and 40 held-out conditions (randomized ball positions). In a similar vein, the OCP task (Bear et al. 2021) tests simulations of a wide range of everyday physical phenomena, including rigid and soft-body collisions, stable multi-object configurations, rolling, sliding, and projectile motion, in a realistic 3D environment with 2,000 scenarios per condition, and is therefore more comprehensive than previous behavioral benchmarks. Thus, our neural and behavioral datasets are high-throughput either in the behavioral comparisons, as OCP is (100 human subjects across 16,000 scenarios), or in the neural readout (almost 2,000 neurons, held-out timepoints, and ball-position conditions).
**Part 2 of 2 of our reply is continued below.** | Summary: This manuscript compared several classes of deep-learning based sensory-cognitive models in their ability to predict human behavior and monkey neural responses in tasks that require reasoning about physical relationships based on visual inputs. They find that the models that match best to **neural data** are the ones trained to predict future states of environments in the latent space of pre-trained foundation models that are themselves optimized for dynamic scenes in self-supervised way. Within these models, the ones that are trained on diverse ranges of tasks are the best. On the other hand, the models that matched **human behavior** best were the models with pixel-wise end-to-end prediction for future scenes trained on the same dataset.
Overall, I think this is a timely work that helps point out the gap between many self-supervised foundation models and the computation potentially being performed in the brain. It emphasizes that pixel-wise prediction overemphasizes details, thus losing the ability to extract useful representations the way animal brains can. The relative distance of models from human and neural data is useful for AI researchers exploring better models along the "gradient" in model space.
Strengths: The models being compared are comprehensive and represent the states of the art.
The metrics being used are reasonable (although with limitations; see my comments under Weaknesses). I appreciate that the authors used correlation of predictions on single stimuli (for human judgement).
The environments that the models are evaluated on focus on physical understanding, which is an important gap for the current AI (e.g., chatGPT and many models lack intuitive physics)
The datasets used for pretraining foundation models also come from a diverse range, from ImageNet to Ego4D and kinetics, especially Ego4D is an ecologically relevant dataset of egocentric videos.
Weaknesses: After reading, although I have obtained a good understanding of the relative performance of the models in predicting physical contact and a somewhat good picture of the relative similarity to human behavioral judgement or monkey neural data in another game (mental-pong), I still lack a good understanding of where the models perform worse and how they are different from the brain. Sure, we see the accuracy is lower as in Fig 2A and they are not correlated well enough to human judgement (Fig 2B, sorry that I am describing it as a half-empty bottle), but readers like me are interested in, at least for the best model, how they are worse than humans: is there any pattern among the stimuli on which the models tend to make a wrong judgment? Maybe this question is hard to answer as we may lack a good representational space to start with in which we can say the models perform poorly in certain regions, but maybe some examples to put in supplementary material, or some subjective summary from the authors' observations, could be helpful, or a more fine-grained comparison between different scenarios (are some scenarios easier for networks? do networks perform more similarly to the brain in some scenarios?). Fig 2C may provide some hint that a model with better accuracy to start with is more correlated with humans, but it still does not answer anything about the pattern of response of a particular model. An analogy for this desired kind of analysis is what psychophysics has done with visual illusions: by analyzing what stimuli cause illusions (a mistake made by the brain) and what models reproduce illusions, we could come up with hypotheses about the computational principles of the brain.
I think some of the metrics being used might potentially mask certain differences between the models and the human/neural data. For example, the fine-grained comparison of predicted hit-probability in the Physion Object Contact Prediction (OCP) task is based on Pearson correlation. But correlation may not capture the overall variance of prediction. The predicted probabilities across stimuli by a model may exhibit a much bigger or much smaller variance than those of humans while showing the same level of correlation. Models and humans may also exhibit different levels of average probabilities across stimuli, which is also not captured by correlation. I think such information as mean variance of prediction may also help understand in what aspects these models are worse than or different from humans.
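The reviewer's concern can be made concrete with a small self-contained sketch (toy numbers, not data from the paper): two hypothetical model prediction vectors can both correlate perfectly with human hit-probabilities while differing markedly from them in mean and variance, because Pearson correlation is invariant to positive affine rescaling of the predictions.

```python
import math
from statistics import fmean, pvariance

def pearson(x, y):
    # Plain Pearson correlation coefficient, no external libraries.
    mx, my = fmean(x), fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

# Hypothetical human hit-probabilities across five stimuli.
human = [0.1, 0.3, 0.5, 0.7, 0.9]

# Model A: predictions compressed toward 0.5 (much smaller variance).
model_a = [0.5 + 0.2 * (h - 0.5) for h in human]

# Model B: predictions shifted upward (different mean).
model_b = [0.5 * h + 0.4 for h in human]

# Both correlate perfectly with the human pattern...
print(round(pearson(human, model_a), 6))  # 1.0
print(round(pearson(human, model_b), 6))  # 1.0

# ...yet their variance and mean differ clearly from the human data,
# which a correlation-only metric cannot detect.
print(pvariance(model_a) < pvariance(human))     # True
print(abs(fmean(model_b) - fmean(human)) > 0.1)  # True
```

This is why reporting the mean and variance of predictions alongside the correlation, as the reviewer suggests, would expose differences that correlation alone masks.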
I do have some worry that the two tasks tested on humans and monkeys reflect different aspects of reasoning. The mental-pong task as displayed does not require any 3D reasoning, while the Physion task, depending on the particular stimuli, should require 3D reasoning to some degree. I worry that the different patterns of performance between Figures 2 and 3 may partly result from this dataset difference.
On the other hand, there is another confound, which is the end-to-end models are used differently in Figure 2 and 3. They were trained and tested on the same environments (but novel scenarios) for Figure 2 but tested on a different environment for Figure 3 (the authors did comment about this second confound in Lines 302-307).
Given the two confounds here, I think the results in Figure 3 are more conclusive than Figure 2 since none of the models being compared were trained on the mental-pong task (the authors did emphasize the result for neural data more than that for the human data in the abstract, which I think is a fair thing to do). But if the authors want to draw some conclusion about model generality (a model trained on a diverse range performs better than those end-to-end models when tested on new tasks), then an additional comparison in which the end-to-end models are trained on a dataset other than Physion but tested on Physion would be a useful addition to Figure 2.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Maybe I am missing it, but I cannot find whether the fine-grained comparisons with human or neural data were performed on held-out data or training data, such as in Fig 2B and 3B,C.
Related to this, I infer from Line 304 that SVF and FitVid were fitted to the same "scenario" of Physion as the scenario being tested on, but Line 301 mentioned they were tested on a novel scenario. It might help to clarify what exactly “scenario” means, especially given that the same word is also overloaded with the meaning of the 8 different scenarios in the Physion environment.
I understand that there is an urge and publication pressure to make claims such as xx model matches human/neural data. But to me, Fig 2B and 3BC all suggest a non-neglectable gap between the networks and the brain. It is this gap that is more inspiring for future research to fill. I suggest changing some subtitle such as 6. to be a relative statement (e.g., "provide a better match") instead of a binary statement of "can match".
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes. Limitations are well addressed. I want to suggest one additional limitation: a fair comparison between different model architectures/objectives would be training the end-to-end models also on the same data (Ego4D) as those foundation models before the evaluation. Perhaps one reason this cannot be done is the limitation of computational resources, which is not the fault of the authors and should not be considered a negative factor for the decision on this paper, if this is indeed the reason.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - *is there any pattern among the stimuli on which the models tend to make a wrong judgment? …maybe some examples to put in supplementary material, or some subjective summary from the authors' observation can be helpful, or a more fine-grained comparison between different scenarios (are some scenarios easier for networks?*
One of the main reasons we used correlation-based measures is that 1) they reflect measures used in many prior studies (especially Bear et al. 2021, where the OCP task is introduced), 2) they allow us to compare a large class of models and identify trends across them, and 3) they enable comparisons across multiple datasets to understand differences between them (Figure 4A) and promising trends common across them (Figure 4B). Nonetheless, this is a great suggestion, and definitely something that is feasible with the metrics we currently use. In particular, as the reviewer suggests, for the OCP task, we plotted the distribution of per-scenario matches to human error patterns across models. We found that a clear pattern emerged, namely that “Drape”, involving a soft material (cloth) draping over other objects by virtue of their shape and the cloth’s material, is generally the hardest scenario for models, whereas “Support”, which involves stacks of objects that may fall over, is the scenario that best matches human error patterns. This is interesting because it suggests that predicting diverse material properties, beyond standard rigid objects (especially those of soft bodies), is a tangible goal for future improved models. We will include this as an additional Figure 2D in the revised manuscript. We also will release our model checkpoints and metrics upon acceptance, so that others may do further analyses on our models. Thank you for making this suggestion.
- *I think some of the metrics being used might potentially mask certain difference between model and human/neural data….But correlation may not capture the overall variance of prediction….I think such information as mean variance of prediction may also help understand in what aspects these models are worse than or different from humans.*
The correlation-based measures reflect measures used in prior studies, like Bear et al. 2021 where OCP is introduced, and enable the identification of trends across large classes of models and stimuli on multiple datasets. Moreover, the mean variance of prediction would still be computed across the probabilities for each stimulus (e.g. $(1/d) \sum_{i=1}^{d} (x_i - y_i)^2$, where $d$ is the number of stimuli), so we are not sure it would provide more insight in the direction the reviewer is asking about than correlation does. Instead, what might be more in line with what the reviewer is suggesting would be to look across individual classes of stimuli. To this end, we have examined one such intuitive grouping based on scenario, looking at the distribution of human error pattern matching across stimuli in these scenarios across models, in a new Figure 2D. We found that models struggle most with the “Drape” scenario, where cloth drapes over objects due to shape and material, and align best with human errors in the “Support” scenario involving potentially collapsing stacks of objects. This indicates a concrete target for future models, which should aim to better predict diverse material properties, especially for soft bodies. While beyond the scope of our present study, which is more focused on identifying trends of successful models as a starting point for understanding human and animal intuitive physics, there may be other groupings across stimuli that could reveal additional insight; therefore, we will release our model checkpoints upon acceptance so that others may study further measures.
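To make the comparison concrete, here is a small illustrative sketch (not the paper's actual analysis code; the per-stimulus probabilities are hypothetical) that computes the correlation-based measure alongside the mean squared difference discussed above:

```python
import math

# Hypothetical per-stimulus probabilities for humans (x) and a model (y);
# d is the number of stimuli.
x = [0.9, 0.7, 0.4, 0.2, 0.8, 0.1]
y = [0.8, 0.6, 0.5, 0.3, 0.7, 0.2]
d = len(x)

# Correlation-based match to human error patterns (the measure used here).
mx, my = sum(x) / d, sum(y) / d
cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / d
sx = math.sqrt(sum((a - mx) ** 2 for a in x) / d)
sy = math.sqrt(sum((b - my) ** 2 for b in y) / d)
r = cov / (sx * sy)

# Mean squared difference across stimuli, as the reviewer suggests:
# (1/d) * sum_i (x_i - y_i)^2 -- also a single aggregate over all stimuli.
mse = sum((a - b) ** 2 for a, b in zip(x, y)) / d

print(f"correlation = {r:.3f}, mean squared difference = {mse:.3f}")
```

Both quantities collapse the per-stimulus probabilities into one number, which is why per-scenario groupings (as in the new Figure 2D) can be more informative than either aggregate alone.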
- *I worry that the different patterns of performance between Figure 2 and 3 may partly result from this dataset difference.*
Both humans and animals can perform Mental-Pong (as shown in Rajalingham et al., 2022a & b), and humans and macaques display 3D reasoning that ensures their survival in the physical world. These general abilities across environments fall under the umbrella of “intuitive physics”. Therefore, while we agree with the reviewer that we expect different environments to potentially recruit different aspects of these abilities (as we quantify in Figure 4A), we want to identify models that can better match **both** fine-grained measures of human and animal intuitive physics well, which we do in Figure 4B in finding that the dynamically-equipped video foundation models can most reasonably do this across model classes. We will emphasize this important point more in Section 6 of the revised manuscript, where Figure 4 is discussed. Thank you for mentioning it.
- *...end-to-end models...were trained and tested on the same environments (but novel scenarios) for Figure 2 but tested on a different environment for Figure 3 (the authors did comment about this second confound in Lines 302-307).*
The latent future prediction dynamics of all the foundation models were trained on Physion just as the end-to-end models were, and those Physion-trained dynamics were evaluated against neural and behavioral data, ultimately outperforming the end-to-end Physion models. Despite our interest, training end-to-end models on datasets larger than Physion exceeds our current computational resources, as evidenced by models like FitVid requiring nearly a month on eight A100 GPUs. To partially work around this constraint, we trained the best models, the dynamically-equipped video foundation models, on Kinetics 700. This made the encoder and dynamics training datasets different from the Physion data used for the OCP metrics in Figure 2 (the 6 hatched bars). The top models, VC-1 and R3M, yielded comparable results to their Physion-trained versions in both human accuracy and neural predictivity. This indicates dataset scale isn’t the sole factor; inductive biases also play a key role. We will discuss this further in Section 4 and the Limitations section in the revision.
---
Rebuttal Comment 1.1:
Comment: Thanks a lot for the reply!
The finding that Drape is most difficult to model is really interesting! I don't know enough of the field but I would be super interested to learn if the order found in figure 1 of the newly submitted pdf resembles any sequence in which children learn about these scenarios (either in terms of showing interest or having control).
Anyhow, I really hope to see this paper published!
---
Reply to Comment 1.1.1:
Title: Reply to Reviewer ZwMf Comment
Comment: Thank you for the kind words! Although we don’t make any particular claims about childhood development (rather about adults), your suggestion is a very interesting one. We are not aware of any studies that specifically point to the progression of understanding about rigid vs. deformable objects. However, for example, one study that may be relevant is **Needham, A., & Baillargeon, R. (1998). Effects of prior experience on 4.5-month old infants’ object segregation. Infant behavior and development, 21(1), 1-24**. This paper discusses how infants’ prior interactions with objects (in this case, a box and a cylinder) shape their understanding of whether they are separate or unified in subsequent experiments. This might suggest that the more complex the behavior of an object (e.g., deformability), the more experiences may be needed to understand it – though of course testing this more directly in children with deformable objects would be of great interest! We will definitely mention it in the revised Discussion.
---
Rebuttal 1:
Rebuttal: **Global Response:**
We thank the reviewers for their thorough reviews, helpful suggestions, and overall positive enthusiasm about our work.
**For common reference, the core contributions of our work are:**
**1. Dense neurophysiological data strongly constrains hypotheses:** Overall, we find structural generalization to novel environments and matching dense neurophysiological data to be strong constraints on models of physical simulation, and many state-of-the-art machine learning models fail to satisfy both (cf. Figures 2B, 3B, 3C, and 4A). Yet, monkeys can perform these tasks without substantial training, suggesting that they are already equipped with the necessary neural foundations for mental simulation.
**2. Latent future prediction as a promising paradigm for mental simulation:** More specifically, mental simulation in primates (humans and monkeys) appears to be best captured by *dynamics* that are trained to predict the future state of the environment in a suitable latent space (cf. dynamically-equipped video foundation models in Figures 3B and 4B).
**3. Pre-training dataset scale is not the only factor:** In particular, this latent space is highly constrained, and dataset scale is *not* simply all you need (cf. Figure S1 for within-architecture neural predictivity comparisons across pretraining datasets of high variation).
**4. Not every latent works:** In fact, this latent space does *not* appear to consist of bespoke object slots or prioritize fine-grained details (e.g. at the level of pixels), or even through supervised tasks on static images (Figure 3B, “End-to-End” and “Image Foundation Models” bars).
**5. Reusable latents on dynamic scenes as neural models:** Rather, the latent mainly has to be *reusable* across *dynamic* scenes. Taken together, our results reveal a correspondence between the ability to predict neural and behavioral responses for the mental simulation phenomena and the development of useful representations for Embodied AI more generally (Figure 4B). This is in contrast to the prior emphasis on classic computer vision tasks such as classification, segmentation, etc. (cf. low neural predictivity of “Image Foundation Models” bars in Figures 3 and 4B), whether optimized in a supervised or self-supervised manner, which have up until now been standard models of the primate visual cortex (cf. Schrimpf et al., 2018).
**Computational Resources:**
Reviewers appreciated our thorough study, which we deemed essential for evaluating many functionally reasonable instantiations of these hypotheses and for allowing the neural and behavioral data to strongly separate them. The resources used are listed in Supplementary Section A of the initial submission. For all models except FitVid and SVG 128 × 128 (SVG trained on 128 × 128 pixel images), a single NVIDIA A100 GPU was sufficient for training. Our study suggests that resource-intensive models like FitVid aren't the best for intuitive physics, as they don't generalize well (Figure S2). Dataset scale isn't the sole key; inductive biases, as instantiated in latent future prediction, are crucial, and merely training on larger image datasets, even ones 250 times bigger than ImageNet, doesn't ensure matching neural dynamics (Figure 3B). We will release model weights post-publication.
**Based on reviewer comments, here we list the major changes we plan to make to our submission if it is accepted:**
**1. Text changes:** With the extra page granted for the camera-ready version, we will enhance the Introduction (Section 1) and Discussion (Section 7) to highlight our paper’s core message: most state-of-the-art machine learning models do *not* meet our rigorous neural and behavioral benchmarks for future prediction. Specifically, many models, including those trained on dynamic scenes like pixel-wise predictors and object-slot models, fall short, as illustrated in Figure 3B. Instead, our findings point towards models whose dynamics predict future latent representations honed through self-supervised pretraining on the egocentric viewpoints that humans and animals naturally receive, as opposed to with RL objectives (VIP bars, Figures 3B and 4B) or web-scale image datasets (“Image Foundation Models”). Furthermore, it is pivotal to train dynamics on these latent representations, as the fixed latents alone are not adequate (compare the leftmost bars in Figure 3B to the remaining bars in each group, for instance).
Additionally, Section 2 will emphasize a bit more how prior CogSci results involving graph neural networks (GNNs) that operate on semantic, non-pixel-level features actually motivate our latent future prediction paradigm, which factorizes vision & dynamics, while Sections 5.2 and 6 will further stress the importance of model proficiency across multiple metrics.
**2. Additional Figures:** Based on Reviewer ZwMf’s excellent suggestion, we include an additional Figure 2D, to show a more detailed error pattern that consistently emerges across models, revealing that they best match human error patterns overall on “Support” scenarios (along with other rigid body scenarios), and could be most improved on the “Drape” scenario. This suggests a concrete future direction of improving models to predict diverse material properties, in order to better handle soft body interactions.
Also, based on feedback from Reviewers 6vWo and bK8w, we are introducing a new Supplementary Figure S3 to specifically highlight latent layers across various models, which were originally included in Figures 2-4 (leftmost bars in each group). The best fixed latents were generally from self-supervised video foundation models, like VC-1 and R3M, which, when dynamically equipped, excelled in predicting the environment's future state across our metrics.
**Both new Figures 2D and S3 are included in the one page PDF we have uploaded here.**
**Finally, we reply to each reviewer individually. Due to character limitations, we respond to the major comments of each reviewer.**
Pdf: /pdf/ae766ebb5056d47f4171191fe08ee9e7ee6423e2.pdf
Dataset source: NeurIPS_2023_submissions_huggingface (conference year 2023)
---
RS-Del: Edit Distance Robustness Certificates for Sequence Classifiers via Randomized Deletion
Paper Decision: Accept (poster)
Summary: This paper proposes **RS-Del** -- a novel certified defense that provides guarantees w.r.t. insertion, deletion, and substitution of bytes within a variable length input. As its name indicates, RS-Del is based on randomized smoothing. However, unlike classic randomized smoothing, the authors cannot certify robustness using the Neyman-Pearson lemma and instead propose a novel certification scheme.
Strengths: The paper has a lot to like. Overall the writing quality was good. Ideas were structured and explained well. The writing was clear.
Certified and robust training literature is dominated by works centered on some variant of an $\ell_p$ threat model. It is rare to see a paper that presents as novel a certified threat model as this one. Reading a paper with such an innovative threat model is refreshing and welcome.
* I would further argue that some in the adversarial ML community do not appreciate how myopically the research community focuses on narrow definitions of robustness. The paper's "Related Work" section does a good job of explaining this point. I appreciated reading the author's perspective in this regard.
The most closely related work I am aware of is Saha et al. (2023) (which the authors cite). I find RS-Del's threat model and type of guarantee much more useful and innovative. Having looked at both papers, I find this work substantially stronger and more compelling.
Weaknesses: There are a lot of things I like about the paper. That notwithstanding, this paper feels more appropriate for a security venue than an ML venue. Understanding malware and the meaningfulness of guaranteeing robustness up to $m$ bytes is generally beyond the expertise of most ML readers (and reviewers). I believe this paper would be a better fit and better appreciated at a venue like S&P or Usenix than at NeurIPS. The misalignment with the venue slightly reduced my score.
### Empirical Evaluation
To the extent of my knowledge, no other method provides guarantees w.r.t. Levenshtein distance. The authors select randomized ablation (RA) [Levine and Feizi 2020] as the primary baseline for Hamming distance. Levine and Feizi's RA is far from the state-of-the-art $\ell_0$ certified method at this point. Two works consistently outperform vanilla RA:
[1] Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Hongbin Liu, and Neil Zhenqiang Gong. "Almost Tight $\ell_0$-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations". ICLR 2022. https://openreview.net/forum?id=gJLEXy3ySpu.
[2] Zayd Hammoudeh and Daniel Lowd. "Feature Partition Aggregation: A Fast Certified Defense Against a Union of Sparse Adversarial Attacks". arXiv 2302.11628. https://arxiv.org/abs/2302.11628
Jia et al. is based on RA and provides tight(er) certification analysis. Hammoudeh and Lowd use model ensembles. This paper does not cite either of these papers (Hammoudeh and Lowd is a newer preprint so I understand why the authors may not have seen it). However, at minimum, Jia et al.'s version of RA must be a baseline of comparison.
* I would have voted "Borderline Accept" with the paper as is had at least Jia et al. (2022) been used as a baseline. Updating the baselines is a necessary condition for me to increase my score. I cannot recommend accepting this paper without at least Jia et al. (2022) as a baseline. I strongly recommend adding both baselines or at minimum a compelling explanation in the rebuttal why one is not possible.
### Interpreting the Empirical Results
The authors specify their certified guarantees in terms of the number of bytes. I have two primary concerns about this framing.
1. Existing certified methods also specify their guarantees in absolute terms. For example, RA provides median guarantees on CIFAR10 of 7 pixels. Providing guarantees in absolute terms makes sense when instances have a fixed size; it's trivial to convert that result into a relative quantity (e.g., 1% of pixels). When test instances have variable sizes as in the case here, no simple conversion exists. For a reader to be able to appreciate how meaningful a guarantee of 128 bytes is, we need to understand the typical range of malware sizes. At an ML venue, such information needs to be provided in the main paper. Moreover, it is the author(s)'s responsibility to explain how the size of the guarantee changes with malware size.
2. Providing certified guarantees as a fraction of malware size is only a small part of the equation. I expect that most ML readers do not know a priori the extent of changes that must be made to a program to induce a change of 128 bytes. For example, would simply recompiling the program with no code changes and just compiler setting changes be sufficient to change the binary by 128 bytes? I do not have expertise in that area, and the authors do not educate the reader in this regard. I consider this choice to be a significant oversight.
### Explaining the Randomized Ablation Baseline
Certified $\ell_0$ method randomized ablation (RA) is used as the primary Hamming distance baseline. However, the authors provide little to no explanation of how RA works. I think this is important so a reader not familiar with RA can appreciate what is being tested in the experiments. Moreover, the authors need to explain how their method differs from RA when restricted to exclusively the substitution setting.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See my comments above under "Weaknesses".
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: I understand why the authors placed Table 1 where they did (put simply -- space). However, the table's explanation does not appear until the next page. When I first saw the table, I did not understand the table and thought perhaps I had missed something. I think separating the tables on separate pages would be a better choice. To prevent a significant increase in length, NeurIPS allows text to wrap around inline figures and tables.
The authors could provide more intuitions about why their setting needs tunable decision thresholds $\eta_y$ while most other randomized smoothing applications do not. Tying this explanation to their case study/expected end use would make the explanation much more compelling.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed and constructive feedback.
### RE: fit for NeurIPS
We believe our paper is well aligned with NeurIPS (see below), and note that this concern was not raised by other reviewers. Our paper advances certified robustness for generic sequence classifiers, providing both algorithmic and theoretical contributions for a previously unexplored threat model. These are contributions the machine learning community has historically been interested in: by our count, 10 papers on certified robustness appeared at NeurIPS last year. We view malware as a compelling domain in which to test our method, given the growing reliance of malware detection methods on machine learning and the domain's real threat of evasion attacks. Our empirical evaluation focuses on certification, not malware defense.
### RE: baselines for Hamming distance
Thank you for suggesting Jia et al. (2022) and Hammoudeh & Lowd (2023) as alternative baselines and for offering to raise your score. We discuss them separately below.
**Jia et al. (2022):**
After receiving this review, we implemented Jia et al.'s method and incorporated it into the certified accuracy plot (updated Figure 2 in the **rebuttal PDF**). We find the certified accuracy curve for Jia et al. (denoted RS-Jia) is identical to the curve for the existing baseline by Levine & Feizi (denoted RS-Abn). This is not surprising given Jia et al.'s certification method for top-k prediction is very similar to Levine & Feizi's method when instantiated for binary classification ($k = 1$). More specifically, Jia et al. use the same ablation smoothing mechanism, and while they do derive a tighter certificate, the difference is negligible in a high-dimensional setting. To explain why the difference is negligible, we note that Jia et al.'s tighter certificate comes from a tighter lower bound on the smoothed classifier's confidence score. In particular, they tighten the lower bound by rounding up to the nearest integer multiple of $q = 1 / {d \choose r}$ (where $d$ is the length of the sequence and $r = \lceil p_\mathrm{abn} d \rceil$ is the number of ablated elements). The difference between the original lower bound and the tighter one is no larger than $q \leq 10^{-3}$ in our experiments, since $d \geq 10^{3}$ for the sequences we consider and $p_\mathrm{abn} \leq 0.999$. Thus in the worst case, the difference is of a similar order to the resolution of the Monte Carlo estimator ($1/4000 = 2.5 \times 10^{-4}$), which explains why the tighter bound does not have a discernible impact.
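As a quick illustrative check of the magnitudes above (the variable names `d`, `p_abn`, and `r` follow the text; this is a sketch, not Jia et al.'s implementation):

```python
from math import ceil, comb

# Settings matching the regime discussed above: sequence length
# d >= 1000 and ablation fraction p_abn <= 0.999.
d = 1000
p_abn = 0.999
r = ceil(p_abn * d)  # number of ablated elements; here r = 999

# Jia et al. round the confidence bound up to the nearest integer
# multiple of q = 1 / C(d, r), so rounding tightens the bound by at
# most q. Since C(d, r) = C(d, d - r) >= d, we get q <= 1/d <= 1e-3.
q = 1 / comb(d, r)
print(q)  # C(1000, 999) = 1000, so q = 0.001
```

For any less extreme `p_abn`, `C(d, r)` is astronomically larger and `q` correspondingly smaller, which is why the rounding refinement has no discernible effect in this high-dimensional binary setting.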
**Hammoudeh & Lowd (2023):**
This work looks interesting as it is not based on randomized smoothing like the other baselines for Hamming distance. We will add a citation to it, however adding it as a baseline would require non-trivial changes for two reasons.
First, the formulation is fundamentally different: Hammoudeh & Lowd assume all inputs have the same dimensionality (Section 2 of their paper), whereas the dimensionality of our inputs vary. Second, the current code base would require significant refactoring to work with large datasets, as it loads the entire training set and test set in memory. This is infeasible for our datasets which are hundreds of GB in size.
### RE: absolute versus relative certified radius
Our certificates are independent of sequence length (see Table 1). This means the _absolute_ certified radius is better aligned with the theory—it allows us to compare radii without artificially introducing a dependence on sequence length. That being said, we agree that _relative_ certified radius is useful as a complementary metric. In our original submission, we reported the median relative certified radius (referred to as the normalized certified radius) in Table 7 of Appendix E.1. In our revision, we have updated Figures 4 and 5 to include a plot of certified accuracy versus relative radius, alongside the existing plot for absolute radius (see **rebuttal PDF**). We will endeavour to include such results for relative radius in the main paper.
### RE: interpretation of edit distance
Whether an edit distance can be regarded as "small" or "large" depends on the application and threat model. For our application to malware, we report the median edit distance induced by several attacks in Table 9. The distance varies from 10's of bytes to millions of bytes, depending on the attack. We appreciate and agree with the reviewer's comments on the need to assist the reader; we will include a discussion of interpreting edit distances in malware in terms of cited/tested attacks, file sizes, and with reference to the relative certified radii from the previous point.
### RE: background on randomized ablation
Thank you for this suggestion. We will add a brief description of randomized ablation (RS-Abn) in Section 5, noting the following differences from RS-Del: (1) RS-Abn performs substitution with a special "masking" value whereas RS-Del performs deletion; (2) the number of elements to edit is fixed for RS-Abn but follows a binomial distribution for RS-Del; (3) RS-Abn provides a Hamming distance certificate, whereas RS-Del provides a generalized edit distance certificate. We will also provide a more detailed summary of RS-Abn in a new appendix, where we will also provide an explicit comparison with RS-Del for the Hamming distance setting.
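To make differences (1) and (2) concrete, here is a minimal illustrative sketch of the two smoothing mechanisms as described above (this is not the authors' implementation; `MASK` is a hypothetical masking token):

```python
import math
import random

MASK = object()  # hypothetical masking token used by randomized ablation

def perturb_rs_abn(seq, p_abn, rng):
    """RS-Abn: substitute a FIXED number r = ceil(p_abn * len(seq)) of
    positions with a masking value; length is preserved."""
    r = math.ceil(p_abn * len(seq))
    masked = set(rng.sample(range(len(seq)), r))
    return [MASK if i in masked else s for i, s in enumerate(seq)]

def perturb_rs_del(seq, p_del, rng):
    """RS-Del: delete each element independently with probability p_del,
    so the number of deletions is Binomial(len(seq), p_del) and the
    output is typically shorter than the input."""
    return [s for s in seq if rng.random() >= p_del]

rng = random.Random(0)
seq = list(b"hello world")            # a toy byte sequence of length 11
abn = perturb_rs_abn(seq, 0.5, rng)   # always length 11, 6 positions masked
dele = perturb_rs_del(seq, 0.5, rng)  # variable length <= 11
print(len(abn), len(dele))
```

Difference (3), the type of certificate each mechanism yields (Hamming vs. generalized edit distance), follows from these mechanisms but is a property of the certification analysis rather than the perturbation itself.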
---
Rebuttal Comment 1.1:
Title: Partial Reply
Comment: I do not have time today to provide a full reply, but I wanted to provide some partial feedback in the expectation of writing more in the coming days. The authors are most welcome to provide a response to this partial feedback in the meantime.
> Fit at NeurIPS
I agree that certified methods have a clear place at NeurIPS; there was never a question of that. However, this paper certifies what I consider is a particularly specialized type of certified robustness targeted primarily at one application -- malware (the paper even contains a malware case study). That's not necessarily a bad thing; as I said in my review, "the paper has a lot to like."
Let's restrict discussion here to the malware case, which I claim is the paper's strongest motivation and biggest focus. Intuitively, your method provides some robustness guarantee $r$. I could be wrong, but I find it unlikely that the vast majority of folks in the NeurIPS community possess the a priori knowledge to assess whether RS-Del's certified guarantees are meaningful in the malware space; such an assessment requires nitty-gritty knowledge of compiled binaries, which I don't expect most to know. I gave an example in my review, where only the malware's compiler settings are changed with no code changes (other examples are possible where just the code is refactored or reordered). In such cases, will a malware's compiled binary, in general, change by more than your median $r$? I admittedly do not know, but I lean towards the total changes far exceeding RS-Del's guaranteed $r$ (I could be wrong). Perhaps the other reviewers know the answer to this question, with me the odd one out, but I would be surprised. You may disagree, but I think it is a fair general rule that if most reviewers and relevant readers cannot assess the semantic meaningfulness of your empirical results, there is some misalignment with the venue.
There are ways to mitigate what I contend is (limited) venue misalignment. If the vast majority of the venue's target audience lacks critical background knowledge, the paper needs to provide it (it's not a perfect solution but better than alternatives). Your rebuttal promises that "*we will include a discussion of interpreting edit distances in malware in terms of cited/tested attacks, file sizes, and with reference to the relative certified radii from the previous point.*" I think that's great. Still, I hope your reply can provide significant specificity here on those details so, at minimum, I can use that information in my assessment of the paper and its empirical results.
> Jia et al. (2022) Baseline
Thank you for adding this baseline. I would have a priori expected a (much) larger gap. Would it be possible to share the updated source code for verification? I would like to run the experiment and verify the implementation of Jia et al.'s version, including the hyperparameters. Of course, I affirm the code will not be used for anything other than reviewing and will be deleted at the end of the review period. I believe authors can provide anonymized links to the ACs, who can then share it with the reviewers.
> Hammoudeh & Lowd (2023):
Thank you for looking into this. Your explanation makes sense given the limited rebuttal time.
---
Reply to Comment 1.1.1:
Comment: ### Interpreting edit distance
We can certify edit distance radii up to 128 bytes without significant loss in accuracy. This corresponds to relative radii 0–9% (as a fraction of file size). Our certificates cover real attacks against malware detectors: attacks in Demetrio et al. (2019) and Nisi et al. (2021) change up to 58 bytes in a file's header, and Lucas et al. (2021, 2023) change as little as 1% of a file. These attacks make iterative localized edits (e.g., editing individual bytes or machine instructions) and are therefore well aligned with the edit distance threat model. Other attacks (e.g., Demetrio et al., 2021) make larger changes in edit distance, outside the threat model.
This is no different from other domains where certification is applied, such as vision, where adopted threat models (e.g., $\ell_p$ perturbations) cover a limited set of possible attacks.
For further perspective on edit distance, we can consider rule-based classification tools widely used for malware analysis in industry. For example, YARA is a rule-based tool supported by VirusTotal.
Running Nextron Systems' public YARA rule set on a sample of binaries from VTFeed, we find 83% of matching rules are sensitive to fewer than 128 bytes. This implies most rules can be evaded by perturbations covered by our certified radii, further demonstrating relevance. This is unsurprising given manual rules are typically sensitive to small byte patterns (function names, file paths, keys, URLs, etc.).
**References**
* Lucas et al. "Adversarial Training for Raw-Binary Malware Classifiers." USENIX Security '23.
* Demetrio et al. "Explaining Vulnerabilities of Deep Learning to Adversarial Malware Binaries." ITASEC '19.
* Nisi et al. "Lost in the Loader: The Many Faces of the Windows PE File Format." RAID '21.
* Lucas et al. "Malware Makeover: Breaking ML-Based Static Analysis by Modifying Executable Bytes." AsiaCCS '21.
* Demetrio et al. "Functionality-Preserving Black-Box Optimization of Adversarial Windows Malware." IEEE Transactions on Information Forensics and Security 16 (2021), 3469–3478.
### Jia et al. (2022) baseline
We are happy to share our code and we've asked for permission from the AC/SAC (thanks for your separate reply on that request).
As an alternative, we would like to further explain why the baselines by Jia et al. and Levine & Feizi perform almost identically in our setting with _high dimensional inputs and binary classes_. In our previous response, we focused on the binary setting, however the empirical improvements reported by Jia et al. hold for the multiclass setting. There's a key difference between these settings as we'll now explain.
Consider the statistical estimation of the lower/upper bounds on the classifier's confidence scores. Jia et al. estimate both bounds simultaneously using a method called SimuEM proposed in their ICLR'20 paper. SimuEM yields a tighter estimate than Levine & Feizi's estimate when there are multiple classes, however _the estimates are equivalent when there are only two classes_ (as is the case in our experiments). This means the only difference between Jia et al. and Levine & Feizi in the binary setting is rounding of lower/upper bounds to integer multiples of $q = 1 / {d \choose r}$. We showed in our previous response that rounding has a negligible impact: it tightens the bound by no more than $10^{-3}$ (but sometimes much less than this) when $d \geq 10^3$.
If the reviewer wishes to interpret Jia et al.'s empirical results in our binary setting, then the above analysis shows that _only the effect of rounding is relevant, not the effect of SimuEM (since it reduces to Levine & Feizi's method for two classes)_. Fortunately, Jia et al. report an ablation study in Tables 1 & 2 of their paper that isolates the impact of rounding and SimuEM. Specifically, they consider the following combinations:
| Label in Tables 1 & 2 | Statistical estimation method | Rounding lower/upper bound |
|--|--|--|
| Levine & Feizi (2019) | Clopper-Pearson for top score | No |
| Levine & Feizi (2019) + SimuEM (Jia et al. 2020) | SimuEM | No |
| "Our method" (referring to Jia et al. 2022) | SimuEM | Yes |
To examine the effect of rounding only (without SimuEM) we can compare the rows labeled "Levine & Feizi (2019) + SimuEM (Jia et al. 2020)" and "Our method". **The certified accuracies reported in these rows are identical for CIFAR-10 and ImageNet at all levels of ablation ($r$)**. This suggests that rounding to an integer multiple of $q$ has no discernible benefit, and that the improved performance of Jia et al. is due to SimuEM.
In summary, we have demonstrated that Jia et al.'s results are consistent with our findings in a high-dimensional binary setting. We only expect Jia et al.'s method to outperform Levine & Feizi's in a multiclass setting where SimuEM makes a difference.
We thank the reviewer for previously offering to increase their score following our evaluation of Jia et al.
---
Summary: This paper tackles the issue of applying randomized smoothing to discrete sequences under the Levenshtein edit distance. Because the underlying sequence is discrete, it necessitates new mathematical approaches to proving the robustness. Enticingly, the edit distance is bounded by employing only the delete operation, making implementation and application far easier than would have otherwise been possible. Noting the inequity in attack risk for malware/security applications, effective bias terms $\eta$ are added to allow asymmetric certificates of robustness. Since no prior work has obtained this result, they compare against a prior Hamming certificate, and show significant improvement over prior results in the smaller Hamming space.
The application to malware is very apt, but this is also relevant to all the domains of which the Levenshtein edit distance is applicable: genomics, time series analysis, epidemiology, linguistics/NLP, etc.
Strengths: 1. This is allowing results in a whole new category of model/hypothesis spaces for randomized smoothing.
2. The results show significant improvement as the maximum radius is reached over closest-prior alternatives
3. The method is easy to implement
4. The real-world asymmetric attack profile is considered and incorporated into the approach, while not firefighting the multi-class support.
5. The paper is beautifully written and the best I have read in reviewing over probably the past 2 years.
Weaknesses: 1. The only weakness to the paper is some missing references. The asymmetric nature of benign vs malicious attacks was previously noted in "Non-Negative Networks Against Adversarial Attacks" and "Adversarially Robust Malware Detection Using Monotonic Classification", and the two should be cited accordingly. Notably, both only deal with the additive threat model $O = [ \texttt{ins} ]$, filling in the broader contribution of this work in completing the Levenshtein operation set.
2. If I had to add a second quibble, I would ask the authors to caution the reader about applying the % certification to other datasets, because each dataset will differ and industrial-scale classification datasets are beyond the scope of proving the mechanism's correctness. (I could see industry research getting weird reviews from academics who don't understand this when attempting to publish case-studies).
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: I honestly can't think of any questions to ask beyond random directions of obviously future work. The appendix is detailed, has code, and the paper is exceptionally well written.
To the AC, please note that my shorter review isn't a remark against thoroughness in reading the work. It just answered all the questions I would have asked. This work is of exceptional quality, presentation, and impact.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: This work has no realistic limitations beyond not "solving" all problems in one go. Other challenges of randomized smoothing naturally remain, and can't be held against the authors given the work's goals and how well it has achieved them.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for providing such encouraging feedback. We are pleased the reviewer recognized the novelty of our work in extending randomized smoothing to new threat models for discrete domains. We also appreciate the reviewer's pragmatic assessment of the limitations.
### W1: Missing references
Thank you for identifying references on the asymmetry of misclassification costs for malware detection by Incer et al. (2018) and Fleshman et al. (2018). We have added citations to both references in Remark 1. While we are not the first to point out this asymmetry, to the best of our knowledge we are the first to consider asymmetric certified robustness for classification.
### W2: Caution about applying certification results to other datasets
Thanks for making the point that experiments cannot prove the correctness of the robustness guarantee (which is covered by the analysis in Section 4). While the experiments support the effectiveness of our approach, we understand the reviewer's concern that readers may falsely assume that the results generalize to other datasets. In our revision, we have identified this as a limitation in the final paragraph of Section 7.
---
Rebuttal Comment 1.1:
Comment: >asymmetric certified robustness for classification.
https://arxiv.org/abs/2302.01961
I believe this pre-print has the same goal, but is in a very different space of models. I don't think being first is relevant to your contribution here. That we can obtain Levenshtein robust bounds using only deletion is highly surprising.
All my concerns are satisfied. I leave my score as-is. You really should go talk with some bioinformatics folks about these results, there are a lot of potential inferences one could make in gene expression classification and potentially verifiably identifiable interactions by bounding the edit distance. At least, that's what my friends in such places mention as a challenge toward their work.
---
Reply to Comment 1.1.1:
Comment: > https://arxiv.org/abs/2302.01961
Thanks for the pointer to this preprint. We will cite it as related work on asymmetric certification, noting that it covers a different family of classifiers (Lipschitz feature map composed with a convex function) and a different threat model ($\ell_p$). We agree with the reviewer that it's not related to our primary contribution—extending certification for the edit distance threat model.
Thanks also for suggesting applications in bioinformatics. We agree that it seems to be a natural fit for our work and certainly worthwhile exploring. | Summary: The paper proposes a general randomized smoothing approach for certifying robustness concerning arbitrary perturbations defined in Levenshtein distance. The critical challenges of proposing the randomized smoothing approach are
1) how to design the smoothing distribution? The paper uses a deletion distribution by randomly removing tokens in the inputs.
2) how to derive the bound? The paper extensively derives a loose bound using the Neyman-Pearson lemma on the proposed deletion distribution.
The approach is effective on malware detection datasets. The paper also proposes an interesting tuning mechanism to favor false positives over false negatives, as in the malware detection scenario, the escaping of malware is a bigger problem than false alarms on benign software.
Strengths: 1. The paper proposes a general randomized smoothing approach for certifying robustness concerning arbitrary perturbations defined in Levenshtein distance. The approach is effective on malware detection datasets.
2. The paper also proposes an interesting tuning mechanism to favor false positives over false negatives, as in the malware detection scenario, the escaping of malware is a bigger problem than false alarms on benign software.
3. The paper also evaluates RS-Del to empirical attacks.
Weaknesses: 1. It is astonishing to see that the classifier maintains a high certified accuracy when >90% of the inputs are deleted. I don't think it is possible to do so on NLP datasets, e.g., movie review datasets like SST2 and IMDB. Deleting >90% of a movie review definitely destroys its meaning.
2. missing related works:
* In lines 40-42, the paper states "there is also no work for ... along with substitution or additive perturbation". This is not the case. ARC [1] is a deterministic approach for certifying robustness of LSTMs given arbitrary perturbation spaces, including insertion, deletion, substitution, and their combinations. However, ARC can only be used for LSTMs and the certified radius is much smaller compared to RS-Del. ARC also scales linearly with respect to the certified radius while RS-Del's time complexity is constant, i.e., equal to the number of samples in Monte Carlo sampling. ARC may introduce more over-approximation (i.e., a looser bound) than RS-Del due to the interval bound propagation (IBP) used in ARC.
* Masking [3,4] has been used as a randomized smoothing approach for certifying the robustness of NLP models with respect to word substitutions.
3. In Table 10, RS-Del does not perform better than NS on Slack-VTFeed and GAMMA-Sleipnir2. So it is "four out of six" instead of "five out of six". However, the NS baseline is quite weak, how about comparing RS-Del to other empirical defense approaches for malware detection?
Technical Quality: 2 fair
Clarity: 4 excellent
Questions for Authors: Comments:
1. $m$ never appears outside Lemma 4. Either remove $m$ from Lemma 4 or provide an intuitive relation between $m$ and $p_{del}^{|\bar{x}|-|x|}$.
2. In line 286, Table 7->Table 8.
Questions:
1. Can we further improve the bound following Lee et al. [2]? In [2], the bound is to first prove that $\forall h \in \mathcal{F}(x, \mu_y)$ the minimal value of $p_y(\bar{x};h)$ will always be achieved when $dist(\bar{x},x)=r$, e.g., when the attacker tries to utilize the attack budget as much as possible. Then they can exactly compute Eq (13) without approximation. The case in [2] is easier since they only allow substitutions, but in this paper deletions and insertions are also allowed. However, in the proof of Theorem 7, the minimizer is achieved when $n_{ins}=n_{del}=0$, indicating the possibility of tightening the bound as in [2].
2. Is Corollary 6 a looser bound, i.e., LHS $\le$ RHS instead of LHS = RHS? The proof seems to assume the substituted ones and inserted ones won't be counted in the LCS, but they potentially can be.
3. In line 45, the paper states "we consider input sequences of bounded and varying length". What's "unbounded length"? It seems all inputs have bounded length.
4. What's the performance of $f_b$?
5. However, the NS baseline is quite weak, how about comparing RS-Del to other empirical defense approaches for malware detection?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 4 excellent
Contribution: 3 good
Limitations: The paper addresses some of the limitations. For other limitations, please refer to my points in Weakness and Questions.
[1] Certified Robustness to Programmable Transformations in LSTMs. Yuhao Zhang, Aws Albarghouthi, and Loris D'Antoni
[2] Guang-He Lee, Yang Yuan, Shiyu Chang, and Tommi Jaakkola. Tight Certificates of Adversarial Robustness for Randomly Smoothed Classifiers.
[3] Certified Robustness to Text Adversarial Attacks by Randomized [MASK]. Jiehang Zeng, Xiaoqing Zheng, Jianhan Xu, Linyang Li, Liping Yuan, Xuanjing Huang
[4] Randomized Smoothing with Masked Inference for Adversarially Robust Text Classifications. Han Cheol Moon, Shafiq Joty, Ruochen Zhao, Megh Thakkar, Xu Chi
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for thoroughly engaging with our work and providing detailed feedback.
### W1: High deletion >90% doesn't harm accuracy
We offer an explanation in Appendix E.1 (lines 922-929). In short, it's important to realize that a deletion probability of 90% does not mean 90% of the sequence elements are inaccessible to RS-Del. Rather, 90% of the sequence elements (on average) are inaccessible for a given Monte Carlo sample. When RS-Del is run using 4000 Monte Carlo samples, each sequence element is accessed $4000 \times (1 - 0.9) = 40$ times in expectation.
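To make this concrete, here is a toy simulation of a randomized deletion mechanism (the parameters are illustrative only, not our actual settings): each position survives any given Monte Carlo draw with probability $1 - p_\mathrm{del}$, so across $N$ draws it is still observed roughly $N(1 - p_\mathrm{del})$ times in aggregate.

```python
import random

def delete_randomly(n, p_del, rng):
    """One draw of a deletion smoothing mechanism: return the set of
    surviving positions of a length-n sequence, each position kept
    independently with probability 1 - p_del."""
    return {i for i in range(n) if rng.random() >= p_del}

# Toy parameters for illustration only (not the paper's settings).
p_del, n_samples, seq_len = 0.9, 1000, 20
rng = random.Random(0)

access_counts = [0] * seq_len
for _ in range(n_samples):
    for i in delete_randomly(seq_len, p_del, rng):
        access_counts[i] += 1

# Each position is expected to survive n_samples * (1 - p_del) draws.
expected = n_samples * (1 - p_del)
```

Even under aggressive deletion, every position is still observed many times across the full Monte Carlo ensemble, which is why the smoothed classifier can retain accuracy.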
### W2: Related work
Thank you for drawing our attention to these papers. In our revision, we have cited Zhang et al. (2021) as a rare example of work that goes beyond the substitution threat model. However, we agree with the reviewer's characterization of its limitations. In particular, the perturbation spaces that can be certified in practice are quite limited (e.g., deletion of up to 2 stop words). We have also added citations to Zeng et al. (2023) and Moon et al. (2023) as examples of work covering the synonym substitution threat model (complementing refs [28] and [80] in our paper).
### W3: Interpretation of Table 10
Our claim that RS-Del achieves the lowest attack success rate (ASR) for "five out of six attacks" holds for each dataset individually. In our revision, we have adopted the reviewer's suggestion and only count a "win" for RS-Del if it achieves the lowest ASR on _both_ datasets.
### C1: Removing $m$ from Lemma 4
We think it's important to introduce the bijection $m$ in Lemma 4, as it's used immediately after Lemma 4 to rewrite terms that appear in the smoothed classifier's confidence score. In particular, $m$ specifies which edits to $\mathbf{\bar{x}}$ can be expressed in terms of edits to $\mathbf{x}$ without changing the summand $s$ (up to a proportionality constant).
### C2: Incorrect reference
Thanks, we have fixed this.
### Q1: Improving the bound following Lee et al. (2019)
Lee et al. provide a generic framework for computing robustness certificates for randomized smoothing based on the Neyman-Pearson lemma. We previously considered their framework and determined it would be computationally infeasible for our mechanism/threat model. To explain why, consider the first step in their framework: computing a pointwise certificate that guarantees the smoothed classifier's prediction at $\mathbf{x}$ does not change at a neighboring input $\mathbf{\bar{x}}$. This requires partitioning the support of the deletion mechanism into regions, such that the relative likelihood of perturbing $\mathbf{x}$ and $\mathbf{\bar{x}}$ to any point in the region is constant. For our mechanism, the relative likelihood does not simplify in general and takes $O(2^{\max \{|\mathbf{x}|, |\mathbf{\bar{x}}|\}})$ time to evaluate in the worst case. This makes the first step of their framework infeasible, let alone the second step, where one must search for the worst-case pointwise certificate in the edit-distance ball.
### Q2: Inequality or equality in Corollary 6
We believe we’ve found the source of confusion. The expression for the lower bound $\tilde{\rho}(\mathbf{x}, \mathbf{\bar{x}}, \mu_y)$ in eqn (14) conceals a dependence on the LCS ($\mathbf{z}^\star$). In Corollary 6, we instantiate $\tilde{\rho}(\mathbf{x}, \mathbf{\bar{x}}, \mu_y)$ for a _particular_ LCS to obtain a loose lower bound on $\rho(\mathbf{\bar{x}}, \mathbf{x}, \mu_y)$ (this is fine since the bound holds for _any_ LCS). The LCS we choose is defined in terms of the cost-minimizing edit path from $\mathbf{\bar{x}}$ to $\mathbf{x}$, consisting of $n_\mathrm{sub}$ substitutions, $n_\mathrm{del}$ deletions and $n_\mathrm{ins}$ insertions. Specifically, it is the sequence obtained from $\mathbf{x}$ by deleting the $n_\mathrm{sub}$ elements substituted in $\mathbf{x}$ and deleting the $n_\mathrm{ins}$ elements inserted in $\mathbf{x}$ (see Appendix B.4). This means the equality in Corollary 6 is correct, provided one understands that the LHS depends on the LCS. To avoid confusion, we have made the dependence on the LCS ($\mathbf{z}^\star$) explicit (in our revision) by using $\tilde{\rho}(\mathbf{\bar{x}}, \mathbf{x}, \mu_y, \mathbf{z}^\star)$ in eqn (14) and propagating the change through eqn (15) and Corollary 6.
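To illustrate the construction (a generic unit-cost Levenshtein sketch, not our actual implementation), the elements matched along a cost-minimizing edit path from $\mathbf{\bar{x}}$ to $\mathbf{x}$ form a common subsequence of length $|\mathbf{x}| - n_\mathrm{sub} - n_\mathrm{ins}$:

```python
def edit_path_lcs(xbar, x):
    """Compute one cost-minimizing (unit-cost Levenshtein) edit path from
    xbar to x, returning the common subsequence of matched elements and
    the counts of substitutions, deletions and insertions on the path."""
    m, n = len(xbar), len(x)
    # dp[i][j] = edit distance between xbar[:i] and x[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if xbar[i - 1] == x[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # delete xbar[i-1]
                           dp[i][j - 1] + 1,          # insert x[j-1]
                           dp[i - 1][j - 1] + cost)   # match/substitute
    # Backtrace one optimal path, collecting matched elements and op counts.
    i, j, z = m, n, []
    n_sub = n_del = n_ins = 0
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1]
                and xbar[i - 1] == x[j - 1]):
            z.append(x[j - 1]); i -= 1; j -= 1       # matched element
        elif i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + 1:
            n_sub += 1; i -= 1; j -= 1               # substitution
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            n_del += 1; i -= 1                       # deletion
        else:
            n_ins += 1; j -= 1                       # insertion
    return z[::-1], n_sub, n_del, n_ins
```

For example, for "kitten" vs "sitting" the matched subsequence is "ittn" of length $7 - n_\mathrm{sub} - n_\mathrm{ins} = 7 - 2 - 1 = 4$.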
### Q3: Meaning of “unbounded length” sequence
We used this phrase to emphasize that our method does not place any limits on sequence length—e.g., we do not require that sequences are padded to a common length. However, upon reflection, we think "variable length" is sufficient to capture this meaning. We have therefore removed "unbounded length" in our revision.
### Q4: Performance of the base classifier
In this work (and prior work on randomized smoothing) the base classifier is not trained to perform well as a standalone classifier. Rather, it is trained to function as a component of the smoothed classifier, where it encounters inputs transformed by the smoothing mechanism. It would therefore be unusual to conduct a standalone evaluation of the base classifier on natural inputs.
### Q5: Comparing with other empirical defenses for malware detection
While the malware literature would benefit from a comparison of empirical defenses, we feel it would best be presented in a separate paper. The focus of this paper is on advancing certification for sequence classifiers; we have developed algorithmic innovations with our randomized deletion mechanism, and corresponding theory to prove certifications. This explains the focus on certification in our experiments, where we include Levine & Feizi (2020) (and Jia et al. (2022) in our revision) as the closest certified baselines we are aware of. The purpose of the empirical robustness experiments is not to compare against empirical defenses, but rather to explore the magnitude of common attacks (in edit distance) and assess robustness beyond radii we can certify.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: For W1, I understand the discussion in Appendix E.1, but the discussion does not make sense if it is not supported by Q4. "each sequence element is accessed about 40 times in expectation", intuitively, if only 40 results out of 4000 are correct, then it is still not a majority of them, making $p_A$ (percentage of predictions of the correct label) very small. So I still think the discussion is not convincing.
Overall, I think that this paper should be accepted.
---
Reply to Comment 1.1.1:
Comment: Thank you for the helpful discussion. If there's more needing to be shared still, for either the reviewer/AC, please let us know. | Summary: This paper aimed to design a certified defense for discrete sequence classifiers against edit distance-bounded adversaries. This method exploited randomized smoothing mechanism to consturct the defense and proposed RS-Del to confer robustness against adversarial delection, insertion and substitution edits.
Strengths: 1. Different from most prior work, this paper focused on protecting models with discrete inputs (e.g., binary executables, source codes and PDF files), which was interesting and was meaningful in the real world.
2. The instructions for the proposed methodology were relatively clear (including the explanations of some theorems and lemmas).
Weaknesses: 1. Although the form of the input data was different, the defensive mechanisms for continuous fixed-dimensional inputs (such as the methods in lines 31 to 35) may also work, but this did not seem to be adequately presented. Could the authors conduct some discussion or even comparative experiments?
2. The abbreviations needed to be supplemented with the full name when they appear for the first time (e.g., RS-Del).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have stated the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our work and appreciating our contribution to certification for discrete modalities.
We respond to specific feedback below.
### W1: Why can't defense mechanisms for continuous fixed-dimensional inputs be used
A key reason why certified defenses for continuous fixed-dimensional inputs cannot be used is that the certificates they produce are ill-defined in our setting.
To see why, consider the most common $\ell_p$ certificate, which guarantees robustness for any additive perturbation
$\mathbf{\delta} \in \mathbb{R}^d$ whose $\ell_p$-norm is smaller than some specified value $r$.
If we try to apply this guarantee in our setting, it says that we can add a real-valued vector to a byte sequence
without changing the classifier's prediction.
However, it doesn't make sense to add a real-valued vector such as $\mathbf{\delta} = [-0.89, 0.97, -0.94, 0.70]$
to a byte sequence such as $\mathbf{x} = [78, 7\mathrm{a}, 2\mathrm{d}, 6\mathrm{a}]$.
Separate from this issue is the fact that an $\ell_p$ certificate only covers inputs of the same length, which is not very useful if inputs can vary in length.
These points are partly covered in lines 89–93 of our submission; we will add the above concrete example to improve clarity.
### W2: Spelling out abbreviations when they first appear
Thanks, we have carefully reviewed the text to address this issue.
---
Rebuttal Comment 1.1:
Title: Comment to the rebuttal
Comment: Dear authors,
Thanks for your response. Your response addresses my concerns. Thus, I am willing to give weak accept score. | Rebuttal 1:
Rebuttal: We thank the reviewers for their positive reception of our paper and for providing us with constructive feedback.
We would like to draw the reviewers' attention to the attached rebuttal PDF, which contains updated figures/results in response to Reviewer MXYp. For the benefit of the other reviewers, we briefly summarize the contents below:
* Figure 2 contains a new baseline for the Hamming distance threat model by Jia et al. (2022) (denoted RS-Jia). It uses a tighter certificate compared to the existing baseline by Levine & Feizi (2020) (denoted RS-Abn), however we find the improvement is negligible in our high-dimensional setting.
* Figures 4 and 5 have been expanded to include a plot of certified accuracy as a function of normalized radius (radius divided by sequence length). This was prompted by Reviewer MXYp's suggestion to report how the size of the guarantee compares to the sequence length (which varies for each input). The new plots complement the existing plots for absolute radius in Figures 4 and 5, and the existing column in Table 7 containing the median normalized certified radius. For further discussion of the normalized radius, we refer to our third response to Reviewer MXYp.
### References
* Jia, J., Wang, B., Cao, X., Liu, H., & Gong, N. Z. "Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations." ICLR'22.
* Levine, A., & Feizi, S. "Robustness certificates for sparse adversarial attacks by randomized ablation." AAAI'20.
Pdf: /pdf/28111ac5c02cfd5a303f253fc19dc57d430122f5.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Analysis of Variance of Multiple Causal Networks | Accept (poster) | Summary: This paper proposes a single structural model to simultaneously construct multiple networks so as to identify causalities varying across multiple cohorts while identifying stable ones. Each causal network is represented via a directed cyclic graph, and the authors propose an analysis of variance (ANOVA) algorithm (NetANOVA) to identify causalities that are different across networks and an SVD based correspondence analysis to identify the key drivers and responders. Theoretical properties of the algorithm for DCG construction are established and simulations and a data example illustrate the working of their methodology.
Strengths: The paper deals with an important problem and appears to be a novel contribution as the problem of investigating causality with multiple networks as done here, has not been studied in the literature. It is presented in a very clear manner and is easy to read in most parts. Some technical aspects could be clarified further (as mentioned under Questions below). The simplicity of their framework; theoretical guarantees and computational scalability are the main strengths in my opinion.
Weaknesses: The svd based technique employed for correspondence analysis to detect key responder drivers pairs, is not clear. It would be helpful if the authors could elaborate further on this while also providing the intuition behind this. Bootstrapping is used to achieve this- what sort of a bootstrap method is used and why?
Intuitively, I would expect the accuracy of inferred DCGs to depend on the range of cohort sizes (min of n^k's to max of n^k's) with lower accuracy as the range increases. Is this the case? If yes, is it evident from the theoretical results?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Please answer questions mentioned under weaknesses above.
Section 5: why is error variance of 0.1^2 a good choice? Can you provide the corresponding signal-to-noise ratio?
Section 3.2: Would it help in interpretation to normalize the coefficient of cause as well so that it lies in (0,1] as R^2?
Please check the paper for typos (e.g. p8, line 257: n_2, n_3 instead?; p5, line 156: responders; p2, line 54: endogenous etc).
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: I did not find a discussion of limitations of the method. Please include a brief discussion of these in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: “The svd based technique employed for correspondence analysis to detect key responder drivers pairs, is not clear. It would be helpful if the authors could elaborate further on this while also providing the intuition behind this. Bootstrapping is used to achieve this- what sort of a bootstrap method is used and why?”
It is daunting to distill key insights from the vast information contained in multiple large networks. We have therefore proposed the correspondence analysis to reveal important clusters of drivers, responders, or even driver-responder pairs based on their similarity/dissimilarity across causal networks. For the possible causal effect of node j on node i, its difference between cohort k and cohort l is estimated as hat{γ}_{ij}^{(k)}-hat{γ}_{ij}^{(l)}; however, this value may be data-sensitive and unstable. Therefore, we take advantage of bootstrap results to estimate its standard deviation and further standardize the difference to construct the matrix Z^{(k,l)} for SVD. With the top singular vectors capturing the major variations across responders/drivers, the correspondence analysis reveals important clusters.
For bootstrap, we took nonparametric bootstrap by randomly sampling the data with replacement, because it is difficult to develop asymptotic distributions on the estimated parameters for the high-dimensional networks.
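The pipeline can be sketched as follows (a minimal NumPy illustration with simulated, hypothetical quantities; a real analysis would refit the model on each nonparametric bootstrap resample rather than simulating noise): standardize the estimated cross-cohort differences by their bootstrap standard deviations, then take the SVD, whose leading singular vectors point to the most deviated responder/driver.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n_boot = 6, 200  # toy network size and number of bootstrap replicates

# Hypothetical estimated causal-effect matrices for cohorts k and l,
# where entry (i, j) is the effect of driver j on responder i.
gamma_k = rng.normal(size=(p, p))
gamma_l = gamma_k + rng.normal(scale=0.1, size=(p, p))
gamma_l[0, 1] += 2.0  # plant one clearly deviated causal effect

# A real analysis would refit on each bootstrap resample of the data;
# here we mimic replicate-to-replicate variability with simulated noise.
boot_diffs = np.stack([
    (gamma_k + rng.normal(scale=0.05, size=(p, p)))
    - (gamma_l + rng.normal(scale=0.05, size=(p, p)))
    for _ in range(n_boot)
])
sd = boot_diffs.std(axis=0)

# Standardized difference matrix Z^{(k,l)}; its leading singular vectors
# highlight the responders/drivers with the largest deviations.
Z = (gamma_k - gamma_l) / sd
U, s, Vt = np.linalg.svd(Z)
lead_responder = int(np.argmax(np.abs(U[:, 0])))
lead_driver = int(np.argmax(np.abs(Vt[0])))
```

In this toy example the planted deviation at responder 0 / driver 1 dominates the standardized matrix, so the top singular vectors concentrate on that pair.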
“Intuitively, I would expect the accuracy of inferred DCGs to depend on the range of cohort sizes (min of n^k's to max of n^k's) with lower accuracy as the range increases. Is this the case? If yes, is it evident from the theoretical results?”
It is a nice point to make. Yes, with fixed K cohorts, the smallest cohort size, which is denoted as n_min, and the sum of all the cohort sizes, which is denoted as n, determine the accuracy of inferred networks and deviated causal effects. As shown in Theorem 4.1, the error bound is inversely proportional to n and is also proportionally to g_n which, as shown in Line 427, is inversely proportional to sqrt{n_min}. In summary, when n_min or n decreases, we have a larger error bound.
“Section 5: why is error variance of 0.1^2 a good choice? Can you provide the corresponding signal-to-noise ratio? Section 3.2: Would it help in interpretation to normalize the coefficient of cause as well so that it lies in (0,1] as R^2?”
We chose the error variance of 0.1^2 following works on single causal networks [4, 13]. With respect to the signal-to-noise ratio (defined as var(γ_j Y_j )/var(Y_i |Y_{-i}) when node i’s value Y_i has causal contribution γ_j Y_j from node j), it ranges from .02861 to 9.8534 with average at .7559 and median at .2335 (based on one simulated dataset).
It is an interesting point on normalizing the coefficient of cause into [0,1]. One way is to calculate the averaged coefficient of cause via dividing it by the number of responders taking this driver. Alternatively, we may divide the total driver’s causal contributions in terms of variance by the sum of variances of all responders. Both measures should be read together with the total number of responders as a driver with too many responders may be of high interest even if the causal contribution to each responder is low.
“Please check the paper for typos (e.g. p8, line 257: n_2, n_3 instead?; p5, line 156: responders; p2, line 54: endogenous etc).”
Thanks for pointing that out. We will correct the typos in our revised version.
“I did not find a discussion of limitations of the method. Please include a brief discussion of these in the paper.”
Although we have developed a limited-information likelihood method to avoid optimizing too many model parameters as the full-information likelihood method does, the proposed method may still be challenged by large K and massive total sample size n when there are too many cohorts to compare. When K is too large, each task in the algorithm (identifying and estimating causal effects for a single responder) has to estimate K(p-1) parameters with an n×(K(p-1)) design matrix, possibly demanding a large amount of memory.
We have developed our algorithm for the case to compare all other networks to a single baseline network and provide theoretical analysis. In practice, we may be interested in the deviated effects of each network from the average effects. Our algorithm can be adopted for such a case. However, it is challenging to develop an appropriate theoretical analysis for this case, and deserves further study. | Summary: In this paper, the authors proposed NetANOVA, an algorithm that simultaneously constructs multiple causal networks and infer their disparities. Theoretical justification of the proposed method is also derived. The paper further proposes measures for variable’s contribution to receivers and responders. Overall, the problem seems solid and experiment results reveal its effectiveness.
Strengths: The task this paper is tackling is of interest and importance. The proposed method is unified on handling multiple causal networks simultaneously and is scalable in various aspects.
The derivation of the method is clear and reasonable. The authors discuss annotations and assumptions of their proposed method. They show theoretical justification of the algorithm, which consolidates their work.
Weaknesses: The paper is not easy to read. Section 2 starts deriving the method directly without further explanation or formulation of the problem. The authors should consider adding a subsection before the method to formally describe the problem.
The experiments seem good. However, no related methods are compared. The authors mentioned several related works in the introduction and they highlighted advantages of the proposed method (scalability in various aspects). The experiments do not discuss how their methods compare with others.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: How is the proposed method compared with related works?
The model is derived on the basis of several assumptions. While in experiments, do those assumptions really hold on the datasets?
Do the authors conduct runtime analysis of their proposed method?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: It would be good if the authors could provide a formal definition of the problem they are trying to solve.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: “The paper is not really good to read. Section 2 starts deriving the method directly without further explanation and formulation of the problem. The authors should consider adding a subsection before the method to formally describe the problem.”
The main purpose is to construct and compare multiple causal networks to gain knowledge of cross-cohort changes in causal effects as well as commonly shared ones (with a focus on the deviated causal effects of other networks from a baseline). We are sorry that we had to keep the description concise to include all aspects of the algorithm development as well as the theoretical/empirical evaluation. We have a longer version of the manuscript and would be happy to make it available on arXiv.
“The model is derived on the basis of several assumptions. While in experiments, do those assumptions really hold on the datasets?”
All networks are simulated according to Assumption 1. The assumptions made for the theoretical work are mostly weak and widely adopted in the literature. Specifically, in our experiment, Assumptions B.1 and B.2 put conditions on the spectra of the variance and covariance matrices as well as on the IV size, and are satisfied by our simulation settings. Assumption B.3 holds because it concerns augmenting the design matrix, and the order of the eigenvalues remains unchanged in our simulation. Assumption B.4 requires limited association between drivers and non-drivers, which also holds in our case.
“Do the authors conduct runtime analysis of their proposed method?”
For our simulation study with one core on a Rome CPU @ 2.0 GHz, the analysis of one simulated dataset (without parallel computation) took 11,001s, 23,508s, and 47,883s for sample sizes 200, 500, and 1000, respectively. The use of one node with 128 cores on an HPC would cut the time to 86s, 183s, and 374s, respectively, taking advantage of the parallel nature of the algorithm.
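As a sanity check (our own back-of-the-envelope arithmetic, not a figure from the rebuttal), the reported runtimes are consistent with near-linear speedup on the 128 cores:

```python
# Reported runtimes (seconds) for analyzing one simulated dataset,
# serial (1 core) vs. one 128-core HPC node.
serial = {200: 11001, 500: 23508, 1000: 47883}
parallel = {200: 86, 500: 183, 1000: 374}

# Each ratio is close to 128, i.e., near-linear parallel scaling.
speedups = {n: serial[n] / parallel[n] for n in serial}
for n, s in sorted(speedups.items()):
    print(f"sample size {n}: {s:.1f}x speedup")
```

Each printed speedup is within a fraction of a percent of the core count, supporting the claim that the algorithm parallelizes well.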
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. I would like to keep my current rating. | Summary: The paper introduces NetANOVA, an algorithm designed for parallel computation to construct a unified structural model for multiple causal networks, or DCGs. NetANOVA utilizes analysis of variance (ANOVA) to identify causalities that differ across networks, as well as important drivers and responders. It is scalable to large data sizes, computational environments, and model complexities, allowing for efficient analysis of multiple networks.
Strengths: 1. The paper introduces a two-stage parallelizable algorithm that scales when multiple cohorts exist.
2. The paper gives solid theoretical results on the consistency for coefficients of determination and cause.
Weaknesses: 1. Experimental results on real data are limited to three DCGs, and there are no comparisons with baseline approaches.
2. The theoretical result does not include the computation complexity of NetANOVA.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In NetANOVA, most computations are matrix products and inversions. Can you briefly estimate the computation cost required when n and K scale?
2. In Figure 4, are DCG(I/II/III) curated by humans? If so, what is the cost of curating each DCG?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: See "weaknesses."
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: “In NetANOVA, most computations are matrix products and inversions. Can you briefly estimate the computation cost required when n and K scale?”
Assuming bar{n} = (1/K) sum_{k=1}^K n^{(k)} is the average sample size, we can break down the computational complexity as follows: the complexity associated with ISIS is O(K bar{n} p), the complexity of ridge regression is O(K bar{n}^3), the projection’s complexity is O(K bar{n}^3), and the complexity introduced by the adaptive lasso is O(K^2 bar{n} p). Collectively, the computational cost for each node is O(K^2 bar{n} p) + O(K bar{n}^3).
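Collecting the stage-wise terms, the per-node cost stated above can be written in display form (a restatement of the same figures in our notation):

```latex
\underbrace{O(K\bar{n}p)}_{\text{ISIS}}
+ \underbrace{O(K\bar{n}^{3})}_{\text{ridge}}
+ \underbrace{O(K\bar{n}^{3})}_{\text{projection}}
+ \underbrace{O(K^{2}\bar{n}p)}_{\text{adaptive lasso}}
= O\!\left(K^{2}\bar{n}p + K\bar{n}^{3}\right)
```

The lasso term dominates when p is large relative to bar{n}^2/K, and the cubic term dominates otherwise.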
“In Figure 4, are DCG(I/II/III) curated by humans? If so, what is the cost of curating each DCG?”
For each sample size (200, 500, or 1000), we randomly simulated 100 sets of networks (with each set including 3 DCGs) as described in Lines 229-236, and accordingly simulated one dataset from each set. Figure 4.a shows part of one simulated set with sample size 500 (the whole network is shown in Figure 6). We applied our algorithm to the data simulated from this set and conducted a correspondence analysis of the results. Figures 4.b-d show the plots of the correspondence analysis. Figure 4.a was obtained via the freely available software Cytoscape, and Figures 4.b-d were produced in R with the packages ggplot and ggrepel. | Summary: This paper presents a unified structural model that describes multiple DCGs in one model and develops a limited-information-based method to simultaneously infer networks and their disparities. Furthermore, it provides robust non-asymptotic theoretical properties. It is applied to synthetic and real datasets to show its performance.
Strengths: The paper proposes a new model that manages to describe multiple DCGs within a single model, and further performs a comprehensive study of it, including algorithmic development, theoretical analysis, and synthetic and real-data experiments.
Weaknesses: The paper proposes a model that describes multiple DCGs by stacking K network-specific structural models in matrix form, with one network chosen as the baseline. However, we fail to see the benefits of proposing such a "unified" model, nor any mathematical novelty in model construction. In the problem setup, there is no relation between or information shared by the K networks. And in the algorithm development, the algorithm almost deals with network-specific parameters independently. In the experiments, we did not see any benefit that this approach manifests.
Due to this straightforward way of construction, it can be foreseen that the algorithmic development and theoretical analyses naturally follow without particular technical issues. The algorithm development seems more or less to do network-specific inference and then stack the results. The theoretical results are mainly those from high-dimensional statistics (specifically l1-related theory on variable screening and selection). Considering the almost independent treatment of each network in the algorithm, the theoretical results naturally follow by repeating those theories from the analysis of structural models; thus, the introduction of these high-dim stats tools may not be highly appreciated.
The simulation study and real-data applications only show the "successful" application of the proposed model. Without a basic comparison to the state-of-the-art, or to independent treatments of each network using classical structural model methods, we have no idea how good the performance of the proposed method is and what additional benefits we gain by modelling multiple networks via a single model.
The literature review in the Introduction is not good, taking less than a third of the Introduction. It is not a critical review: it basically lists 6 papers (lines 24-38) and tells what they did. These works are neither explained in their relation to this paper nor commented on for their strengths/weaknesses/debates. It is just a list of work, which is not sufficient to cover the important work that closely relates to the methods and theories in this paper. What's worse, the sentence at lines 25-26 reads like something irrelevant copied from other biological papers.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See the weakness section. We do not have any further technical questions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: We do not see any potential negative societal impact of this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: “… fail to see the benefits of proposing such a "unified" model, nor any mathematical novelty in model construction. … no relation between or information shared by K networks.”
We disagree with the reviewer’s claims on the benefit and novelty. To the best of our knowledge, we are the first to study and compare multiple causal networks, which allows us to address two types of problems: i) deviation of other networks from a single baseline (e.g., the network of the normal/healthy population); ii) variation (i.e., difference from each other) across multiple networks.
Our method is novel in its strategically unifying multiple causal networks, shown from Line 86 to Line 95, which allows developing the algorithm NetANOVA to directly identify and estimate deviated causal effects when compared to a single baseline. We are also the first to explicitly define coefficients of determination and coefficients of cause for causal networks. Our proposed correspondence analysis of causal effects across multiple networks is also new with its defined summary statistics which take advantage of parallel computation to obtain bootstrap results.
The theoretical benefit of using a unified model is further demonstrated in Theorem 4.1. As shown there, the error bound of each hat{β}\_i is related to n=sum_{k=1}^K n^{(k)} due to the use of the unified model. Instead, an independent construction of the k-th network would result in an error bound related only to n^{(k)}.
Please be aware that, i) the reparametrization in (3) requires a pooled model for all involved networks; ii) with K-th network as the baseline, the parameters in β\_i^{(k)} show the difference between k-th network and the baseline; iii) although the algorithm allows heterogeneous instrumental variables across different networks and therefore may conduct the first stage independently, the second stage has to identify and estimate the deviated causal effects in hat{β}_i which are defined across networks. That is, at this key step, the unified model is directly constructed at Stage 2.3 of Algorithm 1. More details on the second stage can be found under “The inference stage” starting at Line 140.
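To make the baseline-plus-deviation structure concrete, a generic reparametrization of this kind (our illustrative notation, not necessarily identical to the paper's Eq. (3)) writes the node-i coefficients of the k-th network as the baseline plus a deviation:

```latex
b_i^{(k)} \;=\; \beta_i^{(K)} + \beta_i^{(k)}, \quad k = 1, \dots, K-1,
\qquad b_i^{(K)} \;=\; \beta_i^{(K)},
```

so that a nonzero β_i^{(k)} directly flags a causal effect of network k deviating from the K-th (baseline) network, and all K networks are pooled into one design at the second stage.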
“… the algorithmic development and theoretical analyses naturally follow without particular technical issues. … thus, the introduction of these high-dim stats tools may not be highly appreciated.”
We disagree with the reviewer’s claim on the theoretical analysis (please refer to our above rebuttal on algorithmic development). For our first-ever algorithm that directly detects and compares multiple causal networks, the error bounds of the corresponding parameters, especially {hat{β}\_i, i=1,2,...,p}, are not a straightforward extension of the results for individual networks. As shown in Theorem 4.1, the error bound of each hat{β}\_i is related to n=sum_{k=1}^K n^{(k)} due to the use of the unified model. Instead, an independent construction of the k-th network would result in an error bound related only to n^{(k)}.
Technically, theoretical analysis of causal networks is challenged by the inherent endogeneity and by the two-stage algorithms developed to address it. Unifying multiple networks to enable their direct comparison indeed makes it even more challenging to control the error bounds while taking advantage of datasets from multiple populations.
We focus our research on studying the deviation of other networks from a single baseline (e.g., the network of the normal/healthy population), the first type of problem, which is common in disease studies, especially cancer studies. As we stated in Line 89, our algorithm can be extended to study variation (i.e., differences from each other) across multiple networks. However, the parameters of such a unified model would involve parameters from all networks, making it much more difficult (so further efforts are demanded) to develop an appropriate theoretical analysis.
“The literature review in the Introduction is not good, … It is not a critical review: it basically lists 6 papers (lines 24-38) and tells what they did.”
As we are presenting a first-ever algorithm for constructing and analyzing multiple causal networks based on structural models, we focus our review on the most closely related work. The three methods (Liu et al., Cai et al., and Che et al.) are the major developments along this direction, but for a single network.
“These works are neither explained in their relation to this paper nor commented …”
We disagree with the reviewer, because we described the major idea of each method along with comments on its advantages/disadvantages.
“What's worse, the sentence at lines 25-26 reads like something irrelevant copied from other biological papers.”
We again disagree with the reviewer. We have this sentence here to emphasize that constructing structural models with instrumental variables is mainly developed in the field of bioinformatics because natural instrumental variables are available in studying gene regulatory networks. This sentence leads to the review of the developed methods for structural models in bioinformatics.
---
Rebuttal Comment 1.1:
Comment: We thank the authors a lot for carefully addressing the technical points we had missed. Indeed, after diving into the analysis details together with the responses, we agree that we largely underestimated the contributions. Some technical points are quite subtle, which led us to overlook the contribution of introducing a "unified model" (what we thought could be equivalently done by separately analyzing each network actually does not work, or fails to achieve competitive performance in theory).
Thanks for your efforts in explaining the details. We have updated our grading.
---
Reply to Comment 1.1.1:
Comment: Thank you for updating your grading. Please let us know if you have any concerns. | Rebuttal 1:
Rebuttal: We sincerely appreciate all the reviewers for providing constructive comments, which give us a chance to clarify some confusion and improve the quality of our work. We have carefully gone through each comment and done our best to address each one. While we have addressed each reviewer’s points in separate rebuttals, here we would like to address a common concern on the “Lack of comparison to the state-of-the-art or the independent treatments to each network using classical structural model methods”.
To the best of our knowledge, our algorithm is the first one developed for multiple causal network analysis and there is no other state-of-the-art method available to compare to.
Pooling independently constructed networks seems appealing in terms of computation. However, as shown in Theorem 4.1, the error bound of each hat{β}\_i from our algorithm shrinks with n=sum_{k=1}^K n^{(k)}, whereas an independent construction of the k-th network results in an error bound related only to n^{(k)}. Furthermore, due to the high-dimensional setting, the estimates of many parameters may follow mixture distributions. That is, unlike classical low-dimensional problems with asymptotically normal estimators, the estimated values over bootstrapped high-dimensional datasets may mix many zeros with nonzero values. Therefore, even if the computational challenge were to overtake statistical efficiency so that we had to pool independently constructed networks, doing so would demand further development of appropriate strategies.
Because there is no state-of-the-art method to compare against, and pooling results from independently constructed networks is difficult, we focus our simulation studies on evaluating the feasibility of our algorithm and its efficacy over different sample sizes. | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper proposes an algorithm, called NetANOVA, for constructing a unified structural model for multiple causal networks with cycles. The algorithm is designed for parallel computation and is scalable to data size and network complexity. It is able to infer causal networks beyond directed acyclic graphs (DAGs), such as directed cyclic graphs (DCGs), which are needed to depict gene regulatory networks. The paper provides theoretical justification for the algorithm and demonstrates its feasibility and promise with a large-scale simulation study and a real data analysis comparing gene regulatory networks in healthy lung tissues and lung tissues with two types of cancer.
Strengths: * It proposes a method to unify multiple cyclic graphs with a single structural model and also accommodate their disparities, which is a promising method for analyzing gene regulatory networks where feedback loops are regularly encountered.
* The algorithm is designed for parallel computation and is scalable to data size and network complexity.
Weaknesses: * Instrumental variables-based identification of causal effects was proposed for DAGs. There is a lack of discussion on how this extends to causal structures that contain cycles. It is unclear to me whether the identification results in 2.3 are correct when cycles exist.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: * The word "variational" is used frequently. However, this word usually refers to well-known methods/concepts in machine learning, such as variational inference. Is this word choice for "variational causality" validated?
* Regarding the weakness, how would the existence of cycles affect the identification of causal effects using IVs? Most results on IVs are discussed either under the potential-outcome framework or using do-calculus when the underlying causal structure can be described by DAGs.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors do not explicitly discuss the limitations of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: “Instrumental variables-based identification of causal effects was proposed for DAGs. There is a lack of discussion on how this extends to causal structures that contain cycles. It is unclear to me whether the identification results in 2.3 are correct when cycles exist.”
We agree that IV-based methods are mainly proposed for DAGs. However, the econometrics field has seen much development in systems of equations, including model identification, under the name "simultaneous equation models" [20]. Our specification of model identification is adopted from the rank condition developed for simultaneous equation models, but stated in terms of instrumental variables. A rather recent work on identification is that of Matzkin (2008).
Matzkin, Rosa L. "Identification in nonparametric simultaneous equations models." Econometrica 76.5 (2008): 945-978.
“The word "variational" is used frequently. However, this word usually refers to well-known methods/concepts in machine learning, such as variational inference. Is this word choice for "variational causality" validated?”
Thanks for highlighting the potential ambiguity. In our manuscript, we employed the term “variational” to refer to variation/deviation across multiple networks. To avoid any confusion and ensure clarity, we will replace “variational” with “perturbational” in the revised version of our paper.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for replying to the comments. For the second point, I think the response is fine. For the first point (identification), can you please be more specific, as the reference mentioned in the rebuttal is not in the original manuscript?
---
Reply to Comment 1.1.1:
Comment: A system of equations is classically discussed in econometrics and is known as a "simultaneous equation model" [20]. Furthermore, the systems of equations under "simultaneous equation models" [20] cover directed cyclic graphs (DCGs) as well as DAGs. The rank condition is a well-known yet fundamental identification result in econometrics, which can be found in every econometrics textbook that introduces simultaneous equation models.
The rank condition is necessary and sufficient for a system of equations to be identifiable. Our Assumption 1 basically states that any driver in a causal system should have its own unique instrumental variable(s), which makes the rank condition hold. Our mention of the new reference by Matzkin (2008) is only intended to point to a recent study on identification (we do not need this reference to claim the identification of our models). Indeed, the work by Matzkin (2008) provides identification for more general nonparametric models (which certainly cover the models in our paper).
We consider that Assumption 1 leading to the rank condition is obvious, so we do not think it is necessary to detail it further in the text. Per the reviewer's inquiry, we illustrate this point here. Consider a system with p endogenous variables in Y and q exogenous variables in X, described by YΓ+XB=ε. Without loss of generality, consider the first equation, for the first endogenous variable Y_1: Y_1+Y_{R_1}γ_1+X_{S_1}β_1=ε_1. Note that R_1 includes the indices of all drivers of Y_1 and S_1 includes the indices of all instrumental variables unique to Y_1. Accordingly, we can write the other p-1 equations together as Y_1 Γ_1+Y_{R_1} Γ_2+Y_{-[1 R_1]} Γ_3+X_{S_1} B_1+X_{-S_1} B_2=ε. The rank condition states that the first equation, for Y_1, is identifiable if and only if rank([Γ_3,B_2])=p-1. Apparently, Assumption 1 implies rank([Γ_3,B_2])=p-1, because any driver of Y_1 has nonzero components in its own row of B_2 (and zeros in the other rows of B_2), and the non-drivers of Y_1 fall into two groups: those with their own instrumental variables (hence nonzero components in their own rows of B_2 but zeros in the other rows of B_2), and those without any instrumental variables, thus not drivers of any other endogenous variable (hence nonzero components in their own rows of Γ_3 but zero components in the other rows of Γ_3).
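For readability, the system and the rank condition described above can be set in display form (the same objects as in the text, in our typesetting):

```latex
Y\Gamma + XB = \varepsilon, \qquad
Y_1 + Y_{R_1}\gamma_1 + X_{S_1}\beta_1 = \varepsilon_1,
```
```latex
Y_1\Gamma_1 + Y_{R_1}\Gamma_2 + Y_{-[1\,R_1]}\Gamma_3
+ X_{S_1}B_1 + X_{-S_1}B_2 = \varepsilon,
\qquad
\operatorname{rank}\big(\,[\Gamma_3 \;\; B_2]\,\big) = p-1 .
```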
Representation Learning via Consistent Assignment of Views over Random Partitions | Accept (poster) | Summary: The authors propose a new method for self-supervised learning based on cluster assignments. The method is based on a consistent-assignment approach that assigns the same prototype to different views of the same image. To overcome the scalability issues of previous methods, a divide-and-conquer approach is proposed. Random partitions of the learnable prototypes are created each iteration/epoch, which not only makes training stable but also faster. The authors show the effectiveness of the approach across a wide range of datasets and settings.
Strengths: S1: The paper is well written, it’s clearly stated what is novel and what is borrowed from the previous methods, and the explanations of the approach are easy to follow.
S2: The idea of creating subtasks to reduce computation time and avoid collapse sounds novel.
S3: As it is empirically demonstrated, the proposed idea improves the performance over multiple benchmarks on kNN and retrieval tasks while being on-par with linear probe.
Weaknesses: W1: It would be interesting to see how the method works with transformer-based architectures, e.g., within the MoCo v3 framework.
W2: The contributions are not clearly stated apart from the first one: citation [9] in the contributions is wrong; it is SimCLR's contrastive approach that should belong to the negatives class. Moreover, the authors use entropy maximization as a form of normalization, which they even state as a separate contribution. But this cannot be a separate contribution, because it was used before as a technique to avoid trivial solutions, e.g., in [39].
W3: Fig. 7 is not helpful at all. It was not possible to understand what is depicted there. The text (not the caption) is clear though.
W4: In the intro, the authors introduce two classes of SSL. This division is too restrictive and does not cover all SSL methods. Where does, e.g., MAE lie?
W5: [36] is not really recent while the authors refer to it as “were recently proposed” on L41
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Q1: What about using hard pseudo-labels instead of soft assignments?
Q2: L218 - this is an interesting observation; could the authors provide an analysis confirming it?
Q3: What is the value of \lambda_e? There is no information in the implementation details. How does the performance change when this parameter is varied?
Q4: could the authors also add a discussion on the differences to SwAV?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors address limitations in the form of a discussion w.r.t. the previous work on which the new approach is based. The section provides nice insights into the limitations of the previous method but also shows the rather limited novelty of the current approach over the previous one. The method section rather describes the previous approach as a new one, which can be confusing. However, I find the proposed divide-and-conquer approach very interesting and helpful based on the results. It would be helpful to partially rewrite the paper, focusing more on the proposed contribution rather than on the previous method, with additional analysis/confirmation of all the statements.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > W1: would be interesting to see how the methods work with transformer-based architectures?
We thank the reviewer for the suggestion. It is in our plans to explore how the proposed random partition pretext task would behave with other architectures, such as ViTs. However, we chose ResNets as the main architecture mainly because most SSL methods provide baselines with ResNet backbones. While recent SSL methods are switching to ViT, given our computing budget, running experiments with more than one architecture would be impractical. Thus, we decided to pick one base architecture.
> W2: The contributions are not clearly stated apart from the first one [...] the authors use entropy maximization as a form of normalization [...] But this cannot be a separate contribution because it was used before as a technique to avoid trivial solutions, e.g., in [39].
**Our main contribution is the introduction of the random partition pretext task.** In addition, CARP does not require normalizing the embeddings to a hypersphere (required by other methods), nor does it require mining negatives during training. As the reviewer pointed out, the mean entropy maximization of the probabilities has been used to avoid collapse. However, **CARP does not use it as other methods have**, such as TWIST and PAWS. In CARP, the mean entropy is maximized over the random partitions, i.e., we maximize it over many subproblems constantly changing during training. *We believe that this process removes noisy patterns from the processed data and recovers stable representations due to the stochastic process of the partitions*. The importance of our pretext task to avoid training collapse and improve representations' power is shown in Table B1, appendix.
> W3: Fig. 7 is not helpful at all. It was not possible to understand what is depicted there. The text (not the caption) is clear though.
There is no Fig. 7 in the paper.
If you referred to Fig. 2, it was meant to describe the random partition pretext task. Other clustering-based SSL methods pose the view assignment task over the entire set of learnable prototypes. Conversely, CARP's random partitions break the prototypes into smaller subgroups. Each subset of prototypes is used to optimize a view classification task in parallel. In Fig 2, we have 2 views and 8 prototypes. The prototypes are randomly divided into 2 groups. For each group, (1) we enforce consistent assignment of views and (2) maximize entropy over the batch.
> W4: In the intro, the authors introduce two classes of SSL. This division is too restrictive and does not cover all SSL methods. Where does, e.g., MAE lie?
We apologize for the oversight in the taxonomy and commit to updating it. Our motivation was to provide a simple, not exhaustive, categorization of SSL methods based on joint-embedding architectures. We agree that MAE is an SSL method. However, MAE is an autoencoder (encoder/decoder) model based on pixel reconstruction, which makes it different from the joint-embedding methods operating in the embedding space that we were interested in presenting.
> W5: [36] is not really recent while the authors refer to it as “were recently proposed” on L41
We thank the reviewer for the suggestion. We will reword this phrase in the final version of the manuscript.
> What about using hard pseudo-labels instead of soft assignments?
CARP’s consistency loss is optimized when the views' probability distributions are identical one-hot vectors (hard assignments). However, sharpening the distributions toward hard assignments is an iterative process that converges in the limit. To address the reviewer's concern, we trained CARP using hard assignments instead of soft ones. We noticed that training becomes unstable and collapses right at the beginning. We hypothesize that the soft assignments have a regularization effect and prevent the embeddings from all being assigned to the same prototypes.
> Q2: L218 - this is an interesting observation; could the authors provide an analysis confirming it?
In the general rebuttal PDF, we show evidence that the sharpening operation is built-in CARP's loss function. During training, the probability distributions for predictions and targets behave like sharpened one-hot vectors, as shown by the maximum and minimum values of the distribution converging towards one and zero, respectively. The distributions of different sharpening values produce a similar trend.
> What is the value of $\lambda_e$? There is no information in the implementation details. How does the performance change when this parameter is varied?
In practice, the value of $\lambda$ is 1. One of the benefits of our proposed random partition pretext task is a regularization effect that helps prevent collapsed solutions. CARP does not require tuning the $\lambda$ parameter that defines the contribution of the entropy term to the final loss.
We thank the reviewer for pointing out this missing piece. *We will add a description of this parameter in the final version of the manuscript.*
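For concreteness, the mean entropy maximization term described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the function name and the small numerical epsilon are our own assumptions:

```python
import numpy as np

def me_max_regularizer(p):
    """Mean entropy maximization sketch: penalize low entropy of the
    batch-averaged assignment distribution, discouraging all samples
    from collapsing onto a few prototypes.
    p: (batch, K) array of assignment probabilities."""
    p_mean = p.mean(axis=0)  # (K,) average distribution over the batch
    # Negative entropy of the mean; minimizing it maximizes the entropy.
    return float(np.sum(p_mean * np.log(p_mean + 1e-8)))

# Total objective with lambda = 1, as stated in the rebuttal:
# loss = consistency_loss + 1.0 * me_max_regularizer(p)
```

A uniform average assignment yields the minimum value, while a collapsed batch (all mass on one prototype) yields a higher one, which is what makes the term act as an anti-collapse regularizer.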
> Could the authors also add a discussion on the differences to SwAV?
The most important difference between CARP and SwAV is the way they avoid trivial solutions. While SwAV uses a non-differentiable iterative module (Sinkhorn–Knopp) to balance the target predictions over the learned prototypes, CARP uses the random partition pretext task combined with mean entropy maximization. In addition, SwAV requires normalized embeddings to avoid NaNs during training; CARP does not. SwAV enforces consistent assignments between views using the cross-entropy loss and requires extra hyperparameters to sharpen the probability distributions. CARP, on the other hand, **has the sharpening of the probabilities built into its consistency loss**: the loss is optimized only when the probability distributions of the views are identical one-hot vectors, so it sharpens the distributions by design without extra hyperparameters.
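To make the contrast concrete, here is a minimal NumPy sketch (our own illustration, not the official code) of the consistency loss described above, which is zero only when both views' distributions are identical one-hot vectors:

```python
import numpy as np

def carp_consistency_loss(p, q):
    """Sketch of the consistency loss: the negative log of the inner
    product of the two views' assignment distributions. The inner
    product of two distributions is at most 1, with equality only when
    both are the same one-hot vector, so sharpening is built into the
    loss itself. p, q: (batch, K) probability distributions."""
    agreement = np.sum(p * q, axis=-1)  # in (0, 1]
    return float(np.mean(-np.log(agreement + 1e-8)))
```

For identical one-hot inputs the loss is (numerically) zero; for two uniform distributions over K prototypes it equals log K, so gradient descent pushes both views toward agreeing hard assignments without a separate sharpening temperature.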
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors' detailed response. After reading the rebuttal and the comments of the other reviewers, my concerns have been addressed and thus I maintain towards the acceptance of this work.
---
Reply to Comment 1.1.1:
Title: Thanks to reviewer's work
Comment: We thank the reviewer for the thorough work reviewing our manuscript and for the final position toward acceptance of our work.
Strengths: 1. The clustering-based method shows the potential for self-supervised learning.
2. The strategy of random partition can avoid the problem from a large number of prototypes.
3. Experiments are extensive on diverse data sets.
Weaknesses: 1. As a closely-related work, a comparison to CARL is lacking, which makes the contribution of this work unconvincing. Moreover, the original paper of CARL shows that it works well with 10000 prototypes, which contradicts the claim of this work.
2. The performance of clustering-based method in the comparison is out of date. CoKe [1] as an online clustering method demonstrates 74.9% and 76.4% linear probing performance on ImageNet with two-view and multi-view augmentations, respectively. Moreover, CoKe does not request an additional momentum encoder and a sharpening temperature for the pseudo-label from the other view, which makes the contribution of this work less significant.
3. While CARP shows a competitive performance with less training epochs, it is better to show if it can achieve a better performance with the long training strategy.
[1] Unsupervised Visual Representation Learning by Online Constrained K-Means. CVPR 2022.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Besides the weakness, my major concern is about the limited contribution compared to CARL.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > [...] my major concern is about the limited contribution compared to CARL.
The main difference between CARL and CARP is the random partition pretext task and its positive effects on training SSL models. As described in Section 3.1, training a system like CARL is difficult due to instabilities, requiring careful tuning of the entropy term and dealing with the negative trade-off between the two losses. **Our pretext task diminishes this trade-off effect and allows stable training with better outcomes.** Moreover, the entropy regularizer in CARP is also optimized over the prototypes within the subsets (which differs from CARL), contributing to stable training and high-quality representations (high scores in transfer learning benchmarks). Another part of our contribution is the **understanding of how we can stabilize training and avoid collapsed solutions through a stochastic sampling process**, which is achieved with the proposed random partition pretext task.
> [...] the comparison to CARL lacks, which makes the contribution of this work unconvincing. [...] CARL [...] works well with 10000 prototypes, which contradicts the claim of this work.
Even though CARL can be trained with many prototypes, **one needs to carefully tune the weight contribution for the entropy term to avoid training collapse.** As discussed in Section 3.1, **representation quality decreases if the entropy term is too high, and collapse might happen if it is too low.**
We show evidence of this effect in Table B.2 in the appendix. As we increase the size of the partitions, performance decreases. The last column (Table B.2) is equivalent to **not having random partitions** (i.e., CARL), where we see collapse due to a large number of prototypes. The random partition pretext task addresses these limitations by improving overall performance and increasing training stability.
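As an illustration of the random partition idea discussed above, a hedged sketch might look like the following (names, shapes, and the per-block softmax are our own illustrative assumptions, not the authors' code):

```python
import numpy as np

def random_partition_assignments(logits, block_size, rng=None):
    """Illustrative sketch of the random partition pretext task:
    shuffle the prototype axis and split it into random blocks,
    turning one softmax over all K prototypes into several smaller
    assignment problems, one per block.
    logits: (batch, K) similarity scores to the K prototypes."""
    rng = np.random.default_rng() if rng is None else rng
    K = logits.shape[1]
    perm = rng.permutation(K)  # fresh random partition of prototype ids
    blocks = [perm[i:i + block_size] for i in range(0, K, block_size)]
    out = []
    for idx in blocks:
        z = logits[:, idx]
        z = z - z.max(axis=1, keepdims=True)  # numerically stable softmax
        e = np.exp(z)
        out.append(e / e.sum(axis=1, keepdims=True))
    return out  # list of (batch, block_size) assignment distributions
```

Because each block poses a much smaller assignment problem than the full set of K prototypes, the entropy regularizer no longer has to balance thousands of prototypes at once, which is the stabilizing effect the rebuttal describes.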
CARL is focused on small-scale datasets such as CIFAR-10/100 and STL-10, while our work's primary pre-training data is ImageNet. Nevertheless, to address the reviewer's concerns, we took CARL's official code (gitlab.com/mipl/carl) and pre-trained it on ImageNet for 200 and 400 epochs using 3000 prototypes (we experienced collapse with 10000). The results below show that CARP outperforms CARL on linear evaluation by considerable margins.
\begin{array}{lrrr}\hline
Method&Ep&Acc&kNN\\\\
\hline
CARL&200&65.3&59.4\\\\
&400&70.4&62.8\\\\
CARP&200&74.0&66.5\\\\
&400&75.0&67.7\\\\
\hline
\end{array}
> The performance of clustering-based method in the comparison is out of date. CoKe [1] as an online clustering method demonstrates 74.9% and 76.4% linear probing performance [...]
To address the reviewer's concern, we took the officially released 800 epoch model from CoKe's repository and ran our benchmark evaluations. We present the results below.
k-NN transfer learning (k=20), paper's Table 1
\begin{array}{lrcrrrrrrrrrrr}
\hline
Method&Ep&Pets&Flowers&Aircraft&Cars&Country&Food&STL&SRB&Avg @k&&&\\\\
&&&&&&&&&&10&20&100&200\\\\
\hline
CoKe&800&79.5&79.5&27&22.4&14.6&58.9&95.7&60.4&57.1&57.4&57.1&56.3\\\\
CARP&400&86.8&80&42.1&33.5&12.3&58.4&95.9&75.3&60.4&60.5&59.7&59.2\\\\
\hline
\end{array}
Clustering evaluation, paper's Table 2
\begin{array}{lrrrrrrrrrrrr}
\hline
Method&ImgNet&&&Cifar10&&&Cifar100&&&SRB&&\\\\
&NMI&AMI&ARI&NMI&AMI&ARI&NMI&AMI&ARI&NMI&AMI&ARI\\\\
\hline
CoKe&68.9&45.6&21.3&45.9&45.8&34.2&51.9&45.2&19.5&49.4&47.1&13.7\\\\
CARP&70.3&48&23.9&49&48.9&38.7&54.5&48.2&23.1&54.8&52.7&19.6\\\\
\hline
\end{array}
CoKe performs well when trained and evaluated on ImageNet. However, on transfer tasks, **CARP outperforms CoKe by large margins on both k-NN and k-means evaluations.**
> Moreover, CoKe does not request an additional momentum encoder and a sharpening temperature for the pseudo-label from the other view, which makes the contribution of this work less significant.
We'd like to emphasize that CoKe does sharpen the probability distributions in its loss function. If we look closer at CoKe's code (github.com/idstcv/CoKe/blob/main/main_multi_view.py#L351C33-L351C33), we can see two losses: (1) `loss_pred` and (2) `loss_proj`. The first is a hard loss, where predictions are sharpened. The second is a soft loss, where predictions and target distributions are sharpened before the softmax (refer to the parameter `coke_t`).
In fact, CARP's and CoKe's loss functions differ significantly. While CoKe uses a sharpened cross-entropy loss for both the hard and soft losses, **CARP proposes a consistency loss whose goal is to minimize the negative log of the product of the views' probability distributions**. In the general rebuttal PDF, we show evidence that the sharpening operation is built into CARP's loss. During training, the probability distributions for predictions and targets are automatically sharpened toward one-hot vectors.
Also, CARP's contributions **go beyond architectural and implementation details such as momentum encoders and use of temperatures** and mainly **focus on a novel approach to learning representations through prototypes that stabilizes training, avoids collapses, and yields transferable representations.**
> While CARP shows a competitive performance with less training epochs, it is better to show if it can achieve a better performance with the long training strategy.
To address the limitations pointed out by the reviewer, we trained CARP for 800 epochs with multi-crop. CARP achieves 75.9% linear probing on ImageNet, which beats methods such as DINO and SwAV (75.3%). While we understand the reviewer's point of view, we emphasize that *an efficient method is cheaper, allows faster training with reduced hardware costs and emissions (democratizing research), and has strong inductive biases* (allowing the model to extract patterns quickly).
---
Rebuttal Comment 1.1:
Comment: I appreciate the efforts of the rebuttal. However, it did not address my major concerns.
1. For the comparison with CARL, the experiment on ImageNet has collapsing as stated in rebuttal. However, it can be avoided by tuning the weight of the entropy. Since CARL does not have the results on ImageNet, is the parameter of CARL tuned for ImageNet? Moreover, Table 1 in the original paper of CARL shows that it works well with 10,000 prototypes on CIFAR-100. Therefore, 10,000 prototypes should not be a big issue on ImageNet. Tuning the parameter for CARL on ImageNet can be expensive, so it is better to have CARP on CIFAR-100 for a fair comparison. Moreover, the proposed method eliminates the parameter for the entropy while introducing an additional parameter for the partition size, which may not save tuning efforts.
2. For the comparison with CoKe, I note that the CoKe with multi-crop augmentation is adopted for the comparison while CARP has a different setting. Compared with CARP with multi-crop augmentation as in Table 1 in the submission, the gap is quite marginal, i.e., 0.3% on Avg@k. A fair comparison is essential to draw any conclusion. Besides, the phenomenon for the degenerated performance from the multi-crop augmentation is interesting and worth an additional discussion.
Some minor issues.
1. Many deep clustering methods also report clustering results on the benchmark data sets as in Table 2. For example, CoKe reports 76.6% NMI on CIFAR-10 as a clustering method, which is much better than the result in this work. The setting of clustering in this work can be highlighted in the caption of Table 2 to avoid misunderstanding.
2. Unlike [7,8] that have different temperatures for pseudo labels and predictions, CoKe has the same temperature and the label is not sharpened. While this work does not have the temperature, the statement can be more accurate.
---
Reply to Comment 1.1.1:
Title: Reply to rebuttal comments 1/2
Comment: > Since CARL does not have the results on ImageNet, is the parameter of CARL tuned for ImageNet.
As the reviewer mentioned, CARL was not trained on ImageNet. Based on our experience running CARL, we searched over the number of prototypes. When CARL was trained with 10000 prototypes, training collapsed. We kept the entropy weight at 2 (following CARL’s guidelines for other datasets) and reduced the number of prototypes to 3000 (where we observed no collapse).
To fully address the reviewer’s concerns, we pre-trained multiple setups of CARL on ImageNet for 100 epochs. We fixed the number of prototypes as 10000 and ablated different values of lambda (entropy weight). Results are below.
\begin{array} {rrrrrr}
\hline \lambda &1&2&3&4&5\\\\
\hline
CARL&C&C&65.1&64.8&63.9\\\\
\hline
\end{array}
As discussed, using 10000 prototypes causes collapse (**C**), and the workaround is to increase the entropy weight. For reference, CARP's 100-epoch model with 2 views achieves 69.7%.
As discussed in Section 3.1, learning that many prototypes at once with the entropy term, as in CARL, leads to underperformance.
> Table 1 [...] CARL shows that it works well with 10,000 prototypes on CIFAR-100. Therefore, 10,000 prototypes should not be a big issue on ImageNet.
We respectfully disagree with the reviewer on this point. The training of CARL on a much simpler dataset with only 100 classes cannot be extrapolated to a more complex dataset with 1000 classes (10 times more). Moreover, CARL’s paper discusses the relation of the number of prototypes and the number of classes (Fig. 3 in CARL’s paper) which is in line with the observed behavior in our experiments. **In CARP, we show that there is a lack of generalization in the usage of prototypes and propose a solution.**
> [...] it is better to have CARP on CIFAR-100 for a fair comparison.
The results comparing CARP and CARL on CIFAR-100 are below.
\begin{array}{rrrrrrr}
\hline
& Cifar10&&Cifar100&&STL10&\\\\
\hline
Ep. &100&200&100&200&100&200\\\\
\hline
CARL&73.39&78.94&42.91&48.85&76.9&81.95\\\\
CARP&74.84&79.52&44.67&50.64&78.05&82.44\\\\
\hline
\end{array}
We pre-trained CARP on CIFAR-10/100 and STL-10 following CARL’s guidelines (Table 2 in CARL’s paper). We report average top-1 (linear probing) results across 3 independent runs (same as CARL). The number of prototypes was set to 100, 300, and 300 for CIFAR-10, CIFAR-100, and STL-10, respectively (same as CARL), and the partition size was set to 50 for all datasets (no tuning was done to select this partition size). We report models trained for 100 and 200 epochs. CARP outperforms CARL on all datasets. These experiments show that CARP works well on simpler datasets as well as on ImageNet. By default, CARP uses $\lambda=1$, while CARL uses larger values to avoid collapse, e.g., $\lambda=2$, which explains CARP’s improvements.
> Moreover, the proposed method eliminates the parameter for the entropy while introducing an additional parameter for the partition size, which may not save tuning efforts.
We agree with the reviewer that our method introduces a new hyperparameter (the partition size) and avoids the tuning of the entropy weight. However, **the entropy weight and the partition size have different tuning difficulties**. As discussed in Section 3.1, without the random partition pretext task, training collapses if the entropy term is too small, and accuracy is suboptimal if the entropy term is too high. On the other hand, CARP is robust to the choice of partition size, as shown in Table B.2 (appendix).
> [...] Compared with CARP with multi-crop augmentation as in Table 1 in the submission, the gap [with CoKe] is quite marginal, i.e., 0.3% on Avg@k. A fair comparison is essential to draw any conclusion.
As pointed out by the reviewer, when comparing CoKe with CARP with multi-crop, CARP still outperforms CoKe by a small margin (even though CoKe was trained for 800 epochs and CARP for 400). To clear the reviewer's concerns, we extended the previous comparison against CoKe to include all instances of CARP and an additional instance of CoKe trained for 800 epochs w/o multi-crop. CARP consistently outperforms CoKe by large margins.
\begin{array}{lrcrrrrrrrrrrr}
\hline
Method&Ep&Pets&Flowers&Aircraft&Cars&Country&Food&STL&SRB&Avg @k&&&\\\\
&&&&&&&&&&10&20&100&200\\\\
\hline
CoKe &1000&81.3&75.3&29.3&22.6&13.3&60&95.7&64.2&55&55.2&54.1&53.4 \\\\
CoKe (mc) &800&79.5&79.5&27&22.4&14.6&58.9&95.7&60.4&57.1&57.4&57.1&56.3\\\\
CARP&200&86.8&78.2&38.9&29.8&12.2&58.4&95.5&73.7&59.2&59.2&58.5&57.9\\\\
&400&86.8&80&\textbf{42.1}&\textbf{33.5}&12.3&58.4&95.9&75.3&60.4&60.5&59.7&59.2\\\\
&800&\textbf{87.3}&\textbf{81.2}&41.1&33.2&13.6&61.2&\textbf{97}&\textbf{76.4}&\textbf{61.2}&\textbf{61.4}&\textbf{60.4}&\textbf{59.7}\\\\
CARP (mc)&200&78.7&79.7&35&26.6&\textbf{14.5}&61.8&95.5&64.7&57.1&57.1&55.9&55\\\\
&400&83.9&80.3&34.8&27.1&14.2&\textbf{62.9}&95.5&62.8&57.6&57.7&56.8&56\\\\
\hline
\end{array}
---
Reply to Comment 1.1.2:
Title: Reply to rebuttal comments 2/2
Comment: > [...] the phenomenon for the degenerated performance from the multi-crop augmentation is interesting and worth an additional discussion.
We agree with this observation. We expect to study it further in the future. As it is now, this understanding is out of the scope of this work.
> [...] For example, CoKe reports 76.6% NMI on CIFAR-10 as a clustering method, which is much better than the result in this work. [...]
We highlight that CoKe’s results on CIFAR-10 were reported with an inter-dataset setup, meaning that it was trained and evaluated on CIFAR-10. In our experiments, CARP was pre-trained on the ImageNet dataset and evaluated on CIFAR-10 (transfer learning). The latter is a more challenging setup that evaluates the generalization and robustness of the learned representations. This explains the difference observed by the reviewer.
> [...] The setting of clustering in this work can be highlighted in the caption of Table 2 to avoid misunderstanding.
We reported the evaluation protocol in Appendix D.4 (as mentioned in Section 5.2). In addition, we will improve the caption of Table 2 to better convey the evaluation protocol.
> Unlike [7,8] that have different temperatures for pseudo labels and predictions, CoKe has the same temperature and the label is not sharpened. [...]
We do not understand the reviewer’s comment, given that CoKe clearly uses the temperature. First, all sets of variables are sharpened here: github.com/idstcv/CoKe/blob/main/coke/builder_double_view.py#L98C1-L102C76. Then, they are used in the softmax here: github.com/idstcv/CoKe/blob/main/main_double_view.py#L321-L323. Finally, in the loss function here: github.com/idstcv/CoKe/blob/main/main_double_view.py#L330C1-L337C97.
We stated the differences between CARP’s and CoKe’s loss functions and the comparison against other methods, like SwAV [7] and DINO [8]. We would appreciate it if the reviewer could provide additional information about the concern raised in this question.
> [...] While this work does not have the temperature, the statement can be more accurate.
We will review our manuscript to improve the accuracy of how the new pretext task produces the sharpening and its relation with the temperature. As discussed in the other reviews, we will include the empirical evidence that the random partition process sharpens the prediction by design, as shown in the PDF rebuttal document.
We hope to have addressed all the open questions about our work. | Summary: This paper works on self-supervised representation learning. Under the setting of consistent clustering assignment between augmented views (SwAV-like), the authors found that when the number of prototypes is significantly larger than the batch size, the commonly used technique for avoiding trivial solutions fails. And they thus propose to randomly partition the prototypes into multiple subgroups to avoid the trivial solution. Experiments on extensive benchmarks, especially retrieval-based ones, validate the effectiveness of the proposed method.
Strengths: *Originality:* The major insight of this paper: entropy regularization fails when the prototype number is significantly greater than the batch size ($K \gg N$) is novel, and the solution that divides the prototypes into multiple subgroups, is straightforward and verified to be useful. Other components of this framework resemble multiple previous works (*e.g.*, SwAV, DINO, MSN).
*Quality:* The idea is clearly formulated and presented, and the method is well evaluated in extensive experiments, making it a solid paper.
*Clarity:* The idea is straightforward and the delivery is clear; reading is smooth, and I had no trouble understanding it.
*Significance:* It tackles the assignment strategy in clustering-based self-supervised learning, and provides a solution for $K \gg N$, which is somehow useful.
*Reproducibility:* Code is provided in the supplementary material to facilitate reproduction.
Weaknesses: *Motivation*: Why do we need so many prototypes during pre-training? The major stream of self-supervised learning is to enlarge the batch size for better representation learning. The prototype number, in contrast, has not shown the necessity for very large numbers yet. In fact, in DINOv2 they have reduced $K$ from 65536 in DINOv1 to 4096, which is competent for learning good representations. Moreover, large number of prototypes means high computational cost in pre-training. And after pre-training, they are simply dropped and not used in downstream tasks. So why do we need so many prototypes?
*Technical Contribution:* In terms of technical contribution, this work proposes to randomly divide the prototypes into subgroups. The idea is clear and straightforward but the contribution might be limited.
*Evaluation*: The evaluation is mainly focused on $k$-NN or linear probing on retrieval-based benchmarks. However, for one thing, a good deep representation does not have to be linear, and performance under full fine-tuning (transfer learning) might be of higher interest. For another, one might be more interested in generic classification benchmarks (ImageNet) and dense prediction downstream tasks.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - I noticed that different tables may refer to different variants of CARP (or CARP w/ mc). Why is it and why multiple cropping does not help in e.g., Tab. A.1?
- One advantage of clustering-based methods is that they tend to suit ViTs better (e.g., in DINO). Why is this work restricted in ResNets?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: limitations are discussed in Sec. 7
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Why do we need so many prototypes during pre-training? [...]
Based on our practical experience, **the optimal number of prototypes highly depends on the number of hidden classes of the dataset.** Due to ImageNet's high number of classes (1000), in practice, a higher number of prototypes produces better outcomes.
We ablated the number of prototypes in Section B.1 of the appendix. Practical results suggest that over-clustering is beneficial. **However, CARP proved to be robust to the number of prototypes.** Indeed, the **difference in k-NN performance when learning 65536 prototypes as opposed to 4096 prototypes is only 0.5%, which agrees with the reviewer's point about DINOv2**.
While we agree with the reviewer's concerns about the large number of prototypes, our empirical evidence suggests there is a trade-off that must be considered. Providing conclusive evidence and best practices requires more work on understanding this trade-off, which is out of the scope of our current proposal.
> In terms of technical contribution, this work proposes to randomly divide the prototypes into subgroups. The idea is clear and straightforward but the contribution might be limited.
In summary, our main contributions are the proposal of a stochastic partitioning process to improve the learned representations' performance on downstream tasks, as demonstrated in our comprehensive evaluation protocol. Moreover, we empirically show that this process reduces the noise on the learned representations, stabilizes the training, and avoids its collapse.
> The evaluation is mainly focused on k-NN or linear probing on retrieval-based benchmarks. However, for one thing, a good deep representation does not have to be linear, and performance under full fine-tuning (transfer learning) might be of higher interest. For another, one might be more interested in generic classification benchmarks (ImageNet) and dense prediction downstream tasks.
We strived to design an evaluation protocol focused on two main aspects. First, we wanted to evaluate the learned representation in transfer learning scenarios. Second, the evaluation protocol should be diverse, including many distinct datasets with varying difficulty levels.
Our evaluation protocol includes over 15 datasets across many standard protocols, such as linear evaluation, few-shot classification, k-NN, k-means, image retrieval, and copy detection.
We agree with the reviewer that a good representation does not need to be linear. Nevertheless, **linear evaluations test the representation's power** using a simple classifier, **giving a better intuitive sense of how good the representation is, since they do not rely on other components**, such as the backbone architecture or the myriad of hyperparameters used in fine-tuning. Linear probing is a good example: even though it is a linear protocol, many hyperparameters can influence the final result. Consequently, comparing linear performance between methods that used different training strategies becomes unreliable.
A good example is NNCLR [1], where the logistic regressor on top of the frozen representation is trained with the LARS optimizer and large batch sizes. Based on our experiments, this configuration produces significantly better final accuracy than the approach proposed by MoCo (vanilla SGD with small batch sizes). **With evaluations based on k-NN or k-means, the effect of external hyperparameters, choices of optimizers, or even the batch size is reduced**, which, in our view, **makes the comparison fairer.** Nevertheless, we support evaluation protocols that test different aspects of the representation.
[1] Dwibedi, Debidatta, et al. "With a little help from my friends: Nearest-neighbor contrastive learning of visual representations." ICCV. 2021.
> I noticed that different tables may refer to different variants of CARP (or CARP w/ mc).
> Why is it[…]
For our evaluation, we trained 4 instances of CARP in total. CARP 200 and 400 epochs without multi-crop augmentation and CARP 200 and 400 epochs with multi-crop (w/ mc). In the tables in the main text, we report the instances of CARP that performed best in the downstream tasks. Indeed, we did the same for the competing methods. In total, we compared CARP against 11 existing SSL methods. However, in the main text, we reported only a subset that performed well on a given downstream task due to space constraints. **We showed the full evaluation for all instances of CARP and all competing SSL methods in the appendix.**
> and why multiple cropping does not help in e.g., Tab. A.1?
As for the performance using multi-crop, it also caught our attention. In short, **we suspect that multi-crop augmentation may be causing many SSL methods to overfit to ImageNet's training data distribution.** As a result, we may see higher performance scores for linear probing on the same ImageNet, but modest transfer learning performance to other datasets and tasks. However, further experiments are required to fully understand the impact of multi-crop and the decrease in performance seen in our experiments. These evaluations are out of the scope of our current proposal, though.
> One advantage of clustering-based methods is that they tend to suit ViTs better (e.g., in DINO). Why is this work restricted in ResNets?
It is in our plans to explore how the proposed random partition pretext task would behave with other architectures, such as ViTs. However, we chose ResNets as the main architecture mainly because most SSL methods provide baselines with ResNet backbones. Indeed, recent SSL methods are switching to ViTs. However, given our computing budget, running so many long experiments with more than one architecture would be impractical. For this reason, we decided to pick one base architecture and do the best we could regarding training, ablations, and evaluations to make our proposal clear and robust.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their efforts in providing the response. My major concern is still about the motivation: *when and why do we need so many prototypes?* The authors claim that a higher number of prototypes produces better outcomes on datasets that have more hidden classes. Empirically this is reasonable for datasets that are as big or smaller than ImageNet ILSVRC12, but for them, the techniques proposed in this work are also not necessary as agreed by the authors. Hope to see the motivation validated in larger-scale datasets.
---
Reply to Comment 1.1.1:
Title: Reply to rebuttal comments
Comment: > [...] about the motivation: when and why do we need so many prototypes?
**(Why)** Our requirement for more prototypes stems from their ability to leverage local information derived from our learning objective (unsupervised local view agreement). This local data is insufficient to grasp coarser (global) clusters, but it effectively captures changes within the data manifold, as evidenced by our findings.
Based on our empirical experiments and intuitive understanding of our method, here's a breakdown of our earlier point:
With fewer prototypes, each one must encompass a larger set of images exhibiting substantial variations. Consequently, these prototypes must encapsulate more abstract concepts to maintain coherence among the represented images. Conversely, when the number of prototypes matches the training set's image count, the clusters become highly specific representations, resembling a k-nearest-neighbor problem. Our experiments revolve around investigating how the learned representations behave as we alter the number of prototypes (while keeping it below the sample count).
Due to the localized nature of our learning objective, operating on local views of the same image, learning more abstract representations becomes more challenging. To alleviate this challenge, we increase the prototype count. This adjustment lessens the demand for prototypes to grasp intricate abstractions, allowing them to focus on specific characteristics sufficient to represent the smaller set of images with minor variations.
This over-clustering of the data space effectively harnesses our sole source of information: local information. The large number of prototypes, coupled with a stochastic random partitioning process, induces a noise-filtering mechanism that enhances representation robustness, as our experiments illustrate.
**(When)** We aim to overparameterize within the bounds of having more prototypes than or equal to the true class count and fewer prototypes than the data points in the training set.
In the unsupervised scenario, the class count remains unknown. Thus, our task involves exploring the dataset to strike the right balance among prototype count, the unknown class quantity, and the sample number. This process resembles k-means clustering, where data exploration guides the identification of optimal cluster patterns.
Setting up fewer prototypes than true classes results in prototypes acting as higher-level categories. Conversely, matching the prototype and class counts yields generalized representations that remain consistent despite class variations. Exceeding true class numbers leads to more specific representations capturing local changes within the data manifold.
> [...] for [datasets as big as ImageNet] the techniques proposed in this work are also not necessary as agreed by the authors
We highlight that the **main point is not about the number of prototypes to be as high as possible**, but rather to **have a number high enough that allows us to exploit the locality of the views** (through the learning objective) **while supporting the number of classes** (and their complexity).
We clarify that **we don’t mean** (in our rebuttal) **that the high number of prototypes is not necessary to obtain good performance** (as mentioned by the reviewer in this comment). But rather, we mean that **there is a trade-off** between the number of prototypes and classes (and their complexity through their multi-modality) **that must be understood** and considered to select the optimal number of prototypes that will ideally represent each class. However, in our unsupervised setup (and in real-world scenarios where the data is complex and hard to define), this number is not so easy to define and requires experimental evidence to set.
In addition, we show in our experiments (Table B.1 appendix) that CARP is robust to the number of prototypes, and the difference in performance from 65k to 4k prototypes is marginal, which shows CARP's capacity to learn more abstract or more specific prototype representations. We hypothesize that this minimal change is related to the information we get from our learning objective. However, as we decrease the number of prototypes even further (< 4k), the learning problem becomes more complex because the prototypes require a higher level of abstraction, and our self-supervised objective does not provide a sufficient level of information. Consequently, the performance decreases. Understanding this trade-off and automatically finding ways of keeping the relevant prototypes is future work that we intend to tackle. However, it is out of the scope of our current proposal. Nevertheless, we agree with the reviewer about its relevance.
> Hope to see the motivation validated in larger-scale datasets.
We intend to evaluate our method on larger datasets in future work. However, providing such results during the rebuttal period is impossible given our resources.
---
Reply to Comment 1.1.2:
Title: About DINO v2 number of prototypes.
Comment: Based on the following evidence from the DINO v2 repository, we would like to emphasize that DINO v2 uses **65536 prototypes for ImageNet-1K pre-training** (github.com/facebookresearch/dinov2/blob/main/dinov2/configs/ssl_default_config.yaml#L45), not 4096 as previously stated. Moreover, to pre-train on even larger-scale datasets such as ImageNet-22k (github.com/facebookresearch/dinov2/blob/main/dinov2/configs/train/vitg14.yaml#L2), **DINO v2 uses 131072 prototypes**. These numbers are aligned with our experimental results and intuitive explanation in the comment above. | Summary: This paper addresses a collapsing problem that arises in clustering-based contrastive learning. To resolve the problem, the paper proposes an improved version of the consistent assignment in CARL, utilizing a strategy of random partitioning. In particular, the original consistent assignment loss exhibits a stability issue when a large number of prototypes are employed; thus, it is challenging to determine proper hyperparameters to ensure both stabilization and strong performance. This paper introduces a method of random partitioning among the trainable prototypes and applies the consistent assignment loss across each partition. Such an approach effectively mitigates the instability problem associated with a large number of clusters while maintaining the benefits of using numerous clusters. Importantly, the proposed method does not introduce any additional hyperparameters, such as a trade-off parameter or a sharpening temperature. The experimental results demonstrate the effectiveness of the proposed method in various downstream tasks such as transfer learning, clustering, image retrieval, copy detection, few-shot classification, and linear/k-NN evaluation.
Strengths: * The method seems simple yet effective, achieving robust performance and preventing collapse without the need for additional hyperparameters.
* The experiments are conducted across various downstream tasks, including transfer learning, clustering, image retrieval, copy-detection, few-shot classification, and linear/k-NN evaluation; the method consistently demonstrates strong performance.
* The method has been meticulously ablated (in main text and supplementary) with various implementation choices thoroughly examined.
* The paper is well-composed and easy to follow.
Weaknesses: * The discussion and comparison to other related baseline methods [1, 2, 3], which similarly employ the entropy of the mean probabilities of a batch while addressing its stabilization, seem to be missing.
[1] Self-Supervised Learning by Estimating Twin Class Distributions, arxiv 2021 \
[2] Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment, NeurIPS 2022 \
[3] Masked Siamese Networks for Label-Efficient Learning, ECCV 2022
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: ### Suggestion
* Please address the concerns listed in the weaknesses.
* Given that all experiments are conducted using ResNet-50, experimental results with Vision Transformers (ViTs) shall strengthen the contributions of the paper.
### Question
* Why is it important or beneficial to avoid using non-differentiable modules, such as Sinkhorn-Knopp, for generating target cluster assignments? (L4-5, L99-100, L101-103)
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The paper describes a limitation of the proposed method that the representations learned through CARP do not transfer well to dense prediction tasks. This limitation seems logical, as the proposed method simulates smaller pseudo-classification problems, which might be more suitable for downstream tasks such as classification, clustering, and retrieval.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The discussion and comparison to other related baseline methods [1, 2, 3], [...], seem to be missing.
To address the concerns regarding a proper comparison to the other suggested methods, we took the pre-trained models from their respective official repositories and ran the same benchmark used in the paper for all methods. *These results support the conclusion that CARP learns representations that generalize to future tasks.* We detail the results below.
We took the 400-epoch multi-crop MIRA (github.com/movinghoon/MIRA) and the 850-epoch multi-crop TWIST (github.com/bytedance/TWIST/tree/main), both with ResNet backbones. We did not benchmark MSN (github.com/facebookresearch/msn) because it only provides ViT backbones, which prevents a fair comparison with CNN-based encoders. We display the results in the table below.
Clustering evaluation, paper's Table 2
\begin{array}{lrrrrrrrrrrrr}
\hline
Method&ImgNet&&&Cifar10&&&Cifar100&&&GTSRB&&\\\\
& NMI&AMI&ARI&NMI&AMI&ARI&NMI&AMI&ARI&NMI&AMI&ARI\\\\
\hline
TWIST&\textbf{72.7}&\textbf{52.4}&\textbf{28.6}&41.8&41.7&30.4&50.4&43.6&18.5&48&45.6&13.3\\\\
MIRA&68.9&45.7&21.2&39.5&39.4&28.8&49&42.1&17.6&51.6&49.4&15.8\\\\
CARP&70.3&48&23.9&\textbf{49}&\textbf{48.9}&\textbf{38.7}&\textbf{54.5}&\textbf{48.2}&\textbf{23.1}&\textbf{54.8}&\textbf{52.7}&\textbf{19.6}\\\\
\hline
\end{array}
Overall, CARP outperforms MIRA and TWIST on clustering evaluation, mainly in transfer scenarios with more considerable differences in scores. Interestingly, TWIST outscores CARP on the ImageNet dataset. However, **its performance decreased significantly on transfer datasets/tasks such as Cifar-10/100 and GTSRB**. **CARP**, on the other hand, **scores strongly on all 4 datasets.** These results support the conclusion that CARP learns representations that generalize to future tasks.
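As a side note, the NMI/AMI/ARI scores reported in the tables are standard clustering metrics. A minimal sketch of how they can be computed with scikit-learn (the toy labels below are illustrative, not the paper's data):

```python
from sklearn.metrics import (
    normalized_mutual_info_score,  # NMI
    adjusted_mutual_info_score,    # AMI (chance-corrected NMI)
    adjusted_rand_score,           # ARI (chance-corrected Rand index)
)

# Toy example: 3 ground-truth classes, one point mis-clustered.
y_true = [0, 0, 0, 1, 1, 1, 2, 2, 2]
y_pred = [0, 0, 1, 1, 1, 1, 2, 2, 2]

nmi = normalized_mutual_info_score(y_true, y_pred)
ami = adjusted_mutual_info_score(y_true, y_pred)
ari = adjusted_rand_score(y_true, y_pred)
print(nmi, ami, ari)
```

All three are invariant to cluster label permutations, which is why they are suited to evaluating unsupervised cluster assignments against ground-truth classes.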
k-NN transfer learning (k=20), paper's Table 1
\begin{array}{lrrrrrrrrrrrrr} \hline
&& Pets&Flowers&Aircraft&Cars&Country&Food&STL&GTSRB&Avg @k\\\\
Method&Ep&&&&&&&&&10&20&100&200\\\\
\hline
TWIST&850&83.9&73.4&23.1&20.4&13.9&60.4&\textbf{96.6}&59.1&53.8&53.9&52.8&52.1\\\\
MIRA&400&83.4&\textbf{81.4}&35.6&26.6&\textbf{14.3}&\textbf{64.2}&95.6&64.2&58.2&58.2&56.9&56\\\\
CARP&400&\textbf{86.8}&80&\textbf{42.1}&\textbf{33.5}&12.3&58.4&95.9&\textbf{75.3}&\textbf{60.4}&\textbf{60.5}&\textbf{59.7}&\textbf{59.2}\\\\
\hline
\end{array}
On average, *CARP's k-NN transfer performance outperforms both MIRA and TWIST across the 8 transfer datasets.* Individually, CARP wins on 4 out of 8 datasets.
As pointed out by the reviewer, both MIRA and TWIST use the entropy term to avoid collapse during training. One important distinction, however, is that CARP maximizes the entropy over the random partitions and not over the entire set of prototypes. *We hypothesize (and show practical results in Table B2 in the appendix) that the random partition pretext task has a regularization effect that stabilizes training, avoids collapse, and improves final performance.* Because the subtasks constantly change, the model is less prone to collapsing all the embeddings into a single prototype. As a result, the regularization effect from the random partition diminishes the importance of entropy maximization. Hence, we do not need to consider tuning the entropy term's contribution to avoid trivial solutions. TWIST, on the other hand, needs to tune it carefully.
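The per-partition entropy term described above can be sketched in a few lines of numpy. This is our illustrative reading of the mechanism discussed in the rebuttal, not the official CARP implementation; the prototype count, block size, and loss weighting below are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

P, b = 16, 4              # total prototypes, block size (toy values)
emb_dim, batch = 8, 5

protos = rng.normal(size=(P, emb_dim))
z1 = rng.normal(size=(batch, emb_dim))  # embeddings of view 1
z2 = rng.normal(size=(batch, emb_dim))  # embeddings of view 2

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

# Randomly partition the prototype indices into P // b disjoint blocks;
# the partition is re-sampled at every training iteration.
perm = rng.permutation(P).reshape(P // b, b)

loss = 0.0
for block in perm:
    p1 = softmax(z1 @ protos[block].T)  # view-1 assignments within the block
    p2 = softmax(z2 @ protos[block].T)  # view-2 assignments within the block
    # Consistency: the two views should agree on the block-level assignment...
    consistency = -(p2 * np.log(p1 + 1e-8)).sum(-1).mean()
    # ...while the mean assignment over the batch stays high-entropy
    # (anti-collapse), computed per block rather than over all P prototypes.
    mean_p = p1.mean(0)
    entropy = -(mean_p * np.log(mean_p + 1e-8)).sum()
    loss += consistency - entropy
print(loss)
```

Because each block only sees b prototypes, collapsing all embeddings onto a single prototype would be penalized independently inside every block, which matches the regularization intuition given above.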
> Given that all experiments are conducted using ResNet-50, experimental results with Vision Transformers (ViTs) shall strengthen the contributions of the paper.
We thank the reviewer for the suggestion. It is in our plans to explore how the proposed random partition pretext task would behave with other architectures, such as ViTs. However, we chose ResNets as the main architecture mainly because most SSL methods provide baselines with ResNet backbones. Indeed, recent SSL methods are switching to ViTs. However, given our computing budget, running so many long experiments with more than one architecture would be impractical. For this reason, we decided to pick one base architecture and do the best we could regarding training, ablations, and evaluations to make our proposal clear and robust.
> Why is it important or beneficial to avoid using non-differentiable modules, such as Sinkhorn-Knopp, for generating target cluster assignments? (L4-5, L99-100, L101-103)
We apologize that the main text did not convey this matter clearly. We do not propose to avoid these types of non-differentiable modules, but rather we propose to explore a different solution. We will update our writing to convey this tone better.
Our work provides an alternative solution to the cluster assignment problem, with its own benefits and drawbacks. One benefit of an end-to-end differentiable architecture is computational: Sinkhorn-Knopp is an iterative algorithm that may require extra computation. Moreover, it also increases the number of hyperparameters that must be appropriately tuned. Nevertheless, solutions that employ Sinkhorn-Knopp deal with the extra compute and tuning and produce strong results. On the other hand, our approach shows that one can learn equally good, or even better, generalizable representations using a novel strategy that avoids collapse within a fully end-to-end deep learning architecture.
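For context on the iterative nature of the algorithm being discussed, here is a minimal SwAV-style sketch of Sinkhorn-Knopp normalization for cluster-assignment targets. The epsilon, iteration count, and scaling conventions are illustrative assumptions, not tied to any particular paper's implementation:

```python
import numpy as np

def sinkhorn(scores, iters=3, eps=0.05):
    """Sinkhorn-Knopp: alternately normalize rows and columns so the soft
    assignment matrix approaches a doubly stochastic (equipartitioned) one."""
    Q = np.exp((scores - scores.max()) / eps)  # temperature-scaled, stabilized
    Q /= Q.sum()
    n, k = Q.shape
    for _ in range(iters):
        Q /= Q.sum(axis=0, keepdims=True); Q /= k  # equalize prototype usage
        Q /= Q.sum(axis=1, keepdims=True); Q /= n  # equalize sample mass
    return Q * n  # each row sums to 1: a per-sample assignment distribution

rng = np.random.default_rng(0)
scores = rng.normal(size=(8, 4))  # toy sample-to-prototype similarities
Q = sinkhorn(scores)
print(Q.sum(axis=0))  # prototype usage, roughly n / k each
```

The extra iterations and the `eps`/`iters` hyperparameters are exactly the overhead the rebuttal refers to; the random-partition alternative sidesteps this non-differentiable loop.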
---
Rebuttal Comment 1.1:
Comment: I thank the authors for responding to my comments with additional experimental results. The authors' response and discussions with other reviewers addressed most of my concerns about this work. I remain positive about this paper and maintain my score to accept.
---
I add more detailed comments to supplement my position.
I believe that experiments with other architectures, such as ViTs, will strengthen the contribution and justification. The authors replied that they have plans to conduct such experiments; I hope those plans come to fruition.
For discussions and comparisons with related baselines, the authors give additional experimental comparisons in clustering and transfer learning evaluations. While the proposed method doesn't seem to outperform the baselines in all settings, it performs better on average. Although MSN was not included in the experiments, as explained by the authors, I agree that the proposed method has the advantage of not requiring additional hyperparameters (sharpening), losses, or careful tuning, unlike the baselines. This makes the proposed method attractive to me, even though the performance improvement is slight.
I have read other discussions and comments on this page and agree that the idea is very simple, with a narrow and specific usage area, i.e., unsupervised clustering-based methods; thus, the contributions may seem limited. However, considering that there are many variants of works and techniques proposed to address the issue of collapsing, I believe the contribution is not insignificant.
The simplicity offered by the proposed method, which avoids introducing additional hyperparameters, sharpening, or specialized loss functions, seems to me to be an important contribution that distinguishes the method from others. While it is indeed possible to address stability and training through careful tuning, doing so may require multiple rounds of repetitive training and significant computational resources. This makes it difficult to apply these variant methods in various scenarios, e.g., when computing budgets are limited.
---
Reply to Comment 1.1.1:
Title: Thanks to reviewer's work
Comment: We thank the reviewer for the fair assessment, valuable insights, and suggestions for our work. We will incorporate the relevant suggestions into the final version. | Rebuttal 1:
Rebuttal: We thank the reviewers for their time reviewing our work and their valuable feedback. We will incorporate the additional results presented in this rebuttal as well as the suggestions in the final version of our manuscript.
In summary, our main contributions are the proposal of a stochastic partitioning process to improve the learned representations' robustness, as demonstrated in our comprehensive out-of-domain downstream evaluation. Moreover, we empirically show that our random partition pretext task produces a regularization effect that stabilizes the training and avoids collapse.
Our comprehensive evaluation protocol assesses the representation power of more than 11 SSL methods over 16 different datasets. We increased this evaluation, as requested, and included evaluation results for three additional methods, TWIST [a], MIRA [b], and CoKe [c]. The additional results are presented in the reviewers' individual replies.
In short, **CARP pre-trained representations remained top performers, mainly in transfer learning scenarios, which speaks to the strength of the proposed random partition pretext task**.
In addition, we are providing an extra PDF document containing plots that strengthen the intuition about the sharpening effect of the consistency loss used by CARP.
We highlight that we evaluated our method against other ResNet-based models due to their prevalence in the existing SSL methods. In this light, our evaluation is fairer and demonstrates the improvements of the proposed partitioning while maintaining the backbones. While we share the desire to have more tests as proposed by the reviewers, due to our limited computing budget, we could not evaluate several backbones on our setup and ablations.
[a] Self-Supervised Learning by Estimating Twin Class Distributions, arxiv 2021
[b] Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment, NeurIPS 2022
[c] Unsupervised Visual Representation Learning by Online Constrained K-Means. CVPR 2022.
Pdf: /pdf/b3937b74d91686bc2d3eafa5b6e257920bb848e2.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
LEPARD: Learning Explicit Part Discovery for 3D Articulated Shape Reconstruction | Accept (poster) | Summary: The paper studies the research problem of shelf(DINO)-supervised articulated 3D shape reconstruction. The key idea of the paper is to factorize shapes into different primitives, and to model the shape primitives using both global and local deformation parameters. The parameters of the model are optimized in a kinematics-inspired way, where the 3D kinematics are projected into 2D to utilize the DINO supervision. The experiments show the proposed model outperforms previous state-of-the-art methods, including both primitive and holistic reconstruction approaches.
Strengths: 1. The high-level idea of factorizing primitive parameterization into global and local deformations make sense.
2. The performance of the proposed method has been well evaluated and clearly outperforms previous state-of-the-art methods.
Weaknesses: 1. As a non-expert in this field who does not have much knowledge about physics-based deformable model, I find this paper hard to follow and sometimes not self-contained. Not being able to understand some details about kinematics could be caused by my limited knowledge about physics, but, importantly, it's not clear to me at all why the kinematics-based optimization approach is adopted in this paper. Conceptually, I understand optimizing primitives without direct 3D primitive supervision requires some sort of regularization -- but what kind of prior knowledge or physical facts are encoded in this model, given that there are no real forces/materials? What are the assumptions/hypothesis that generate these forces? And what makes the local deformation to be small/local? It is really important to make the high-level intuition clear before dive into the details. I also wonder if one uses the global+local deformable model in this paper but with simple optimization methods (e.g. directly sample points from primitive surface and project to 2D, then minimize the mask L1/MSE/BCE loss), will such simple alternatives not converge/lead to inferior performance?
2. In the quantitative ablation (Fig.6), the performance gap between the model w/o local and the full model does not seem too large (e.g. compared to the gap between LEPARD and Hi-LASSIE in Table 2). What is the typical std/error bar of this evaluation?
Other comments/questions:
1. It would have been beneficial to provide the dataset statistics.
2. How is K decided? (L164)
3. What is the limitation/failure cases of the proposed method?
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Please see the questions in the weaknesses. Overall, the high-level idea of global+local deformations modeling makes sense, and the qualitative results of the proposed method looks impressive. However, I do think the writing of the paper needs major improvement and many details need further clarification.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: A detailed discussion about limitation and failure case is missing now and should be included.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >Q1: Motivation of kinematics-based optimization approach adopted in this paper.
A1: Thanks for the comment! We realize that we may have made presumptions regarding the reader's familiarity with PDMs, and we appreciate the feedback. Let's break down the approach and the intuition behind our choices:
**Virtual Forces**: The forces we reference in the paper aren't real forces we encounter in physical systems. Instead, these are "virtual forces". They're computed based on the virtual displacement of the surface points of the primitive[33]. They are used to measure how well the points on the primitive surface are deformed to match the target shape.
**Kinematic-Based Optimization**: The choice to employ kinematic-based optimization stems from the conceptualization of the primitive deformation as a Lagrangian dynamic system[33]. This means the points on the primitive surface are continuously deforming to match the target shape during the training iterations, and also the shape-related parameters of the primitive change accordingly. During each training iteration, by minimizing the virtual forces applied to the primitives, we can obtain a reconstructed shape that is closest to the target. However, we need more effective regularization due to the lack of 3D supervision. In addition to directly sampling points from the primitive surface and optimizing them (which might seem straightforward), we seek to optimize groups of shape-related parameters that control the transformations of the primitive, i.e., translation, rotation, global and local deformations. This imposes tighter constraints, and is crucial especially when we do not have any 3D supervision but only 2D image-based evidence. Our method essentially provides more constrained regularization, making the model more robust and accurate in the absence of 3D supervision. We will carefully revise the paper by simplifying the text and adding intuition to make it easier to read.
>Q2: I also wonder if one uses the global+local deformable model in this paper but with simple optimization methods (e.g. directly sample points from the primitive surface and project to 2D, then minimize the mask L1/MSE/BCE loss), will such simple alternatives not converge/lead to inferior performance?
A2: We understand your concern and added an ablation study using our primitive parameterization with only the image force loss (MSE). The results are given in Table 2 of the Author Rebuttal PDF.
>Q3: In the quantitative ablation (Fig.6), the performance gap between the model w/o local and the full model does not seem too large (e.g. compared to the gap between LEPARD and Hi-LASSIE in Table 2). What is the typical std/error bar of this evaluation?
A3: The local deformations capture fine-grained shape details and greatly improve the visual quality of the reconstructed shapes, while their effect is not entirely captured by the quantitative metrics, which improve only slightly. We also added std/error bars in Table 2 of the Author Rebuttal PDF.
>Q4: It would have been beneficial to provide the dataset statistics.
A4: We train our model on Pascal-Part and LASSIE dataset following[35,40], and use the same evaluation protocol as LASSIE.
>Q5: How is $K$ decided? Q6: Limitation
A5, A6: Please see the general Author Rebuttal for details.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed response. Although I still don't fully understand what the "tighter constraints"/"constrained regularization" is intuitively (I strongly encourage the authors to spend more time on this for a broader audience), my other concerns have been addressed. I am willing to raise my rating (still with low confidence though).
---
Reply to Comment 1.1.1:
Title: Response to Reviewer Hc7X
Comment: Thank you for your constructive feedback! We will carefully revise our paper for better readability. We genuinely appreciate your willingness to reconsider the rating based on our explanations. We'd be thankful if you could take a moment to update the rating in the system. Let us know if you have any questions! | Summary: The paper introduces LEPARD, a framework for reconstructing the 3D shape of animals from single images. LEPARD reconstructs 3D shapes as parts, which are parameterized primitive surfaces with global and local deformations. LEPARD is trained using off-the-shelf deep features without the need for 2D or 3D annotations. Experimental results demonstrate more detailed shape reconstruction.
Strengths: **Method:**
- 3D reconstruction as parts is an interesting approach to 3D reconstruction and is generally less explored than holistic reconstruction. Exploration in this direction can be beneficial to the community.
- The proposed method reconstructs details better by modeling coarse shapes as well as detailed deformations of each part. This is an intuitive approach and appears to be effective.
**Experiments:**
- The comparison with SOTA is thorough and complete to the best of my knowledge. The improvements over SOTA appear to be significant.
- The ablation study clearly demonstrates the effectiveness of each component.
Weaknesses: I have several confusion about the method.
- The model parameters are not defined precisely. It’s unclear how q_c and q_theta are defined ( how are they just c and R? or is there any difference?)
- Details regarding how the local deformation d is obtained from v_s are missing. The concepts of the stationary velocity field and the Gaussian smoothing layer should be explained in more detail to aid understanding. Also, the paper should include justification for using the stationary velocity field and the Gaussian smoothing layer.
- I don’t understand Equation 5. The author introduces the model Jacobian matrix without defining what the model is. It’s also unclear why the velocity x is relevant in this static reconstruction setting. In addition, it’s unclear how Equation 5 is derived.
- In L152-153, what is the practical meaning of “the energy of the primitive” and how is this energy and the force f_3d energy related to 3D shape reconstruction?
- how is the loss in equation 12 derived?
- In equation 14, shouldn’t G^i be part-based (as described in L162 and illustrated in Fig 3) and hence be denoted as G^(i,k)? Otherwise, why is the rendered part mask compared with the mask of the whole object? If G^i is part-based, how are the part labels from the DINO feature associated with the K primitives?
I find it hard to understand the proposed method and hence cannot recommend acceptance given the current state. Please address my questions above
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: Questions are listed in weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: There is no discussion about limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your detailed comments! We will release the code for reproduction. We hope the following responses can address your concerns.
>Q1: The model parameters are not defined precisely. It’s unclear how $q_c$ and $q_\theta$ are defined.
A1: $q_c = c \in \mathbb R ^3$ represents the 3D translation of each part, while $q_\theta \in \mathbb R ^4$ is a standard 4D quaternion formulation that represents the 3D rotation of each part. Please also see L134-135 of the main paper and the Notes in the Notation table (Table 3 of the supplementary material).
>Q2: Details regarding how the local deformation d is obtained from $v$ are missing. The concept of SVF and Gaussian smoothing layer should be explained more in detail to help understanding.
A2: The local deformation $\textbf{d}$ is modeled as diffeomorphic point flows, which is a smooth and invertible spatial transformation (diffeomorphic mapping) and is widely used in shape modeling[1,2]. Diffeomorphic mappings allow for the computation of a time-dependent velocity field via an ordinary differential equation (ODE). However, such computations can be intricate and resource-intensive. To address this, a stationary velocity field (SVF) is usually employed [3]. This SVF maintains a constant velocity and simplifies the parameterization of diffeomorphisms. More specifically, a diffeomorphic mapping is achieved as the trajectory and integration of a smooth SVF $v$. The widely accepted practice[2] employs a scaling and squaring layer (S&S) for the integration, and utilizes convolution with suitable kernels, e.g., Gaussian kernels with positive scale, to smooth out the velocity fields and achieve a more refined output - this process is termed the Gaussian smoothing layer. More comprehensive explanations of this can be found in Section A.3 of the supplementary.
[1] Beg, M. Faisal, et al. "Computing large deformation metric mappings via geodesic flows of diffeomorphisms." IJCV, 2005.
[2] Amor, Boulbaba Ben, Sylvain Arguillère, and Ling Shao. "ResNet-LDDMM: advancing the LDDMM framework using deep residual networks." TPAMI, 2022.
[3] Arsigny, Vincent, et al. "A log-euclidean framework for statistics on diffeomorphisms." MICCAI, 2006.
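The scaling-and-squaring integration mentioned above can be illustrated with a short 1-D numpy sketch. This is a toy illustration of the generic S&S scheme under our own assumptions (grid size, step count, and the sine-shaped velocity field are arbitrary), not the paper's implementation; in practice the velocity field would also be smoothed with a Gaussian kernel before integration:

```python
import numpy as np

# 1-D stationary velocity field (SVF) sampled on a grid; scaling-and-squaring
# integrates it into a displacement field, i.e., the diffeomorphic mapping.
n, steps = 64, 6                      # grid size, number of squaring steps
x = np.linspace(0.0, 1.0, n)
v = 0.05 * np.sin(2 * np.pi * x)      # smooth toy SVF

def compose(disp, x):
    # Compose the warp x -> x + disp with itself: the new displacement is
    # disp(x) + disp(x + disp(x)), evaluated via linear interpolation.
    return disp + np.interp(x + disp, x, disp)

# Scaling: start from the field divided by 2**steps ...
disp = v / (2 ** steps)
# ... and squaring: compose the small warp with itself `steps` times,
# so the result approximates the flow of v over unit time.
for _ in range(steps):
    disp = compose(disp, x)

print(disp.max())
```

For a small, smooth velocity field the resulting map x -> x + disp stays strictly monotone, i.e., invertible, which is the defining property of the diffeomorphic point flows used for the local deformations.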
>Q3: I don’t understand Equation 5. The author introduces the model Jacobian matrix without defining what the model is. It’s also unclear why the velocity x is relevant in this static reconstruction setting. In addition, it’s unclear how Equation 5 is derived.
A3: Equation 5 shows how a change in a 3D point $x$ is translated to a change in the shape-related parameters of the primitive. Our approach is inspired by the kinematics of physics-based deformable models [33], which treat the shape deformation as a dynamic system; thus the time derivative is used for $x$ and $q$. In our setting, at an arbitrary training iteration $t$, the 3D point $x$ from the primitive surface should match the point $x_{gt}$ from the surface of the ground-truth shape. $\dot{x}$ (i.e., $dx$) shows how this point should change to make this possible, and $\dot{q}$ (i.e., $dq$) is the corresponding change in the shape-related parameters of the primitive. Thus, in the static reconstruction setting, the model is deformed during training to match the ground-truth shape. Please see Sec. A.4 of the supplementary material for the detailed derivation of Equation 5.
>Q4: In L152-153, what is the practical meaning of “the energy of the primitive” and how is this energy and the force $f_{3d}$ energy related to 3D shape reconstruction?
A4: In the context of 3D shape reconstruction, the "energy of the primitive" refers to the amount of virtual work required to deform a primitive so that it aligns with a target shape. Here, "virtual work" is defined as the product of the virtual force and the distance over which this force acts, specifically the distance between the primitive surface and the target shape. During the training process, the goal is to minimize this energy, ensuring that the primitive shape deforms and closely matches the target shape. In other words, the lesser the energy, the closer our primitive is to the desired 3D shape.
>Q5: how is the loss in equation 12 derived?
A5: As we explained in Q4, we minimize the primitive energy to deform the primitive to match the target shape. The primitive energy refers to the amount of virtual work, defined as the integration of the image force $f_\text{proj}$ over the change of the points $dx_\text{proj}$ (Eq. 11). Using Eq. 5, the change of points $dx_\text{proj}$ is further expressed as the product of the Jacobian $L_\text{proj}$ and the change of shape-related parameters $dq$. Thus, we can convert the minimization of the 2D point-wise image forces into the minimization of the corresponding parameter-based (generalized) forces via the Jacobian matrix, which leads to Eq. 12.
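The Jacobian-transpose mapping from point-wise forces to generalized (parameter-space) forces can be sketched on a toy primitive. Everything below — the 2-D template point, the translation-plus-scale parameterization, and the learning rate — is an illustrative assumption, not the paper's superquadric model:

```python
import numpy as np

# Toy "primitive": one 2-D point x parameterized by q = (c_x, c_y, s) as
# x(q) = c + s * x0, i.e., a translation c and scale s of a template point x0.
x0 = np.array([1.0, 2.0])

def model(q):
    c, s = q[:2], q[2]
    return c + s * x0

def jacobian(q):
    # L = dx/dq is a 2x3 matrix: dx/dc = I, dx/ds = x0 (constant here).
    return np.hstack([np.eye(2), x0[:, None]])

q = np.array([0.0, 0.0, 1.0])
x_target = np.array([2.0, 3.0])

# Minimizing the "energy" (squared distance to the target) by converting the
# point-wise force f = x_target - x(q) into a generalized force f_q = L^T f
# acting on the parameters, then taking gradient steps on q.
lr = 0.05
for _ in range(200):
    f = x_target - model(q)
    q = q + lr * jacobian(q).T @ f

print(q)
```

The same pattern — accumulate $L^T f$ over surface points and update $q$ — is how parameter-space forces drive the primitive toward the target in the kinematic optimization described above, just with far richer parameterizations.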
>Q6: In equation 14, shouldn’t $G^i$ be part-based and hence be denoted as $G^{(i,k)}$?
A6: Since the semantic clusters from DINO features do not necessarily match the number of parts, we do not use part-based ground truth but use the whole mask. During training, all primitive parts are initialized and deformed together to reach the boundary of the mask. To avoid intersections between primitives, we follow PDMs and check for primitive inter-penetration in each training iteration. If two primitives penetrate each other, we assign two equal and opposite collision forces $f_n$ and $-f_n$ that are proportional to the distance between each pair of selected points on the two primitives. These forces are added to the respective points on the two inter-penetrating primitives to adjust the forces $f_\text{proj}$ and thus push the primitives apart. In this way, the primitives are deformed to different parts of the shape under the kinematics-based optimization. We will add all the necessary details to the revised paper/supplementary.
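A minimal sketch of the described pairwise collision handling (the force constant `k` and the choice of a single point pair are illustrative assumptions):

```python
import numpy as np

def collision_forces(p_a, p_b, k=1.0):
    """For a penetrating point pair, return equal and opposite forces
    proportional to the distance between the two points: the force on
    primitive A pushes it away from B, and vice versa."""
    f_n = k * (p_a - p_b)
    return f_n, -f_n

f_a, f_b = collision_forces(np.array([0.0, 1.0]), np.array([0.0, 2.0]))
# The pair cancels out: no net force is injected into the system.
assert np.allclose(f_a + f_b, 0.0)
```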
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their detailed response and for the clarifications. My questions are properly addressed. It would be helpful to include the clarifications in the final version. I am willing to raise my rating.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer a5TN
Comment: Thank you for your acknowledgment of our work! We will carefully refine our paper for better clarity. Your willingness to re-evaluate the rating means a lot to us. If it's convenient, please take a moment to update the rating in the system. Do reach out if there are further queries or concerns! | Summary: The paper presents LEPARD, a framework for reconstructing the 3D articulated shape of animals from a single in-the-wild image. It explicitly represents the parts as parameterized primitive surfaces (superquadrics) with global and local deformations in 3D. The authors employ a kinematics-inspired optimization to guide the deformations. Besides, LEPARD is trained solely using off-the-shelf deep features from DINO, without requiring any 2D or 3D annotations. Experiments on the Pascal-part and LASSIE datasets show the superiority of the proposed method.
Strengths: - The paper is clearly written and easy to follow.
- The authors propose to use local non-rigid deformations to capture fine-grained shape details, which is different from previous works.
- The authors propose a framework to compute image-based forces based on the discrepancy of DINO features and the projected primitive parts.
Weaknesses: There are a limited number of categories in the datasets evaluated. Can the authors qualitatively evaluate their methods on held-out images from other sources for the same training categories and some unseen categories (maybe fine-tuning a small set of images)?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: In L174, it is mentioned that "the estimated parameters for camera translation and rotation" are used. I am curious whether these camera parameters are trained by the image force only. Are there any additional supervisions or any missing implementation details (e.g., predicting multiple hypotheses as in A-CSM)? Because it can be quite ill-posed to optimize both camera parameters and part parameters.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have not adequately addressed the limitations. It would be nice if the authors could discuss, e.g., 1) the effect of the quality of pseudo labels generated by DINO on the final performance; 2) the bottleneck of the current method for further improvement.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your strong recognition of our work! We hope the following responses can address your concerns.
>Q1: There are a limited number of categories in the datasets evaluated. Can the authors qualitatively evaluate their methods on held-out images from other sources for the same training categories and some unseen categories (maybe fine-tuning a small set of images)?
A1: Thanks for the suggestion! We have tested on the Objaverse dataset for known categories such as elephants. The results are given in Fig. 2 of the Author Rebuttal PDF. Fine-tuning on additional images for unknown categories is beyond the scope of this paper, and we will include it in future work.
>Q2: In L174, it is mentioned that "the estimated parameters for camera translation and rotation" are used. I am curious whether these camera parameters are trained by the image force only. Are there any additional supervisions or any missing implementation details (e.g., predicting multiple hypotheses as in A-CSM)? Because it can be quite ill-posed to optimize both camera parameters and part parameters.
A2: We do not use the multiple camera hypothesis approach from A-CSM [20] and U-CMR [14], but predict a single camera viewpoint similar to LASSIE [40] and Hi-LASSIE [41].
>Q3: It will be nice if the authors can discuss the effect of the quality of pseudo labels generated by DINO on the final performance
A3: Thanks for the suggestion! We added an experiment to test the effect of the DINO feature quality on the final performance. Specifically, we use the Pascal-Part dataset, which provides GT masks for evaluating DINO feature quality. We sort the images based on the IoU between the DINO feature and the GT mask, and split these images into three groups according to their quality (IoU) rankings, i.e., groups 1, 2, and 3 with the best, medium, and worst quality, respectively. We then train a separate model on each group and evaluate their performance in terms of overall IoU and Part IoU. The average results over all three tested categories are reported in Table 1 of the Author Rebuttal PDF. We observe that the performance of our approach only slightly degrades with lower DINO feature quality, which may also be due to the fewer training samples.
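The grouping protocol can be sketched as follows (the IoU values are made up; only the ranking-and-splitting logic reflects the description above):

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-union between two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union

# One hypothetical IoU score per image, computed against the GT mask.
ious = [0.9, 0.4, 0.7, 0.2, 0.8, 0.5]
# Rank images by IoU, then split into best / medium / worst thirds.
order = sorted(range(len(ious)), key=lambda i: ious[i], reverse=True)
groups = [order[0:2], order[2:4], order[4:6]]  # group 1 (best) .. group 3 (worst)
```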
>Q4: the bottleneck of the current method for further improvement.
A4: The bottleneck is that the performance of our method depends on using the correct number of parts for reconstruction. While this is not very restrictive, in future work we plan to extend our method so that it works without knowledge of the number of parts. Please see the general Author Rebuttal for details.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed reply and the extra results. It shows that the model is quite robust. I would like to keep my rating.
However, I still have a question about optimizing camera parameters in a self-supervised way. Is there a dataset bias (e.g., most animals face toward the camera, so that the range of azimuth is only $[-\pi, \pi]$)? Theoretically, without multiple hypotheses [1], the worst prediction error for a continuous function mapping $R^3$ to $SO(3)$ can be quite large.
[1] Xiang, Sitao, and Hao Li. "Revisiting the continuity of rotation representations in neural networks." arXiv preprint arXiv:2006.06234 (2020).
---
Reply to Comment 1.1.1:
Title: Response to Reviewer 5yZr
Comment: Thank you for your feedback and acknowledgment of our work. In our implementation, we integrate the optimization of camera rotation parameters into the kinematics modeling, which provides more strict constraints and improves the performance of unsupervised learning. As suggested, incorporating multiple hypotheses could lead to even more robust optimization and will be included in our future exploration. | Summary: The paper describes a method for fitting K superquadric geometric primitives (enhanced with tapering, bending, and diffeomorphic local deformations) to a set of images of an animal category (e.g. elephant). The main contribution is that the method requires no supervision, and uses 2D feature correspondence to impose constraints on 3D shapes.
Strengths: Superquadrics with limited deformations offer a good balance between geometric expressivity and parameter compactness.
The end-to-end image supervision of 3D geometry is fundamentally sound. Thus, relying only on 2D feature correspondences, and translating those to 3D constraints, is a robust way to aggregate information from an unstructured image collection.
The results are compelling and a clear improvement on previous work.
Weaknesses: Animal bodies are kinematic chains, where one limb affects another. This approach does not take that into account.
Even though the paper focuses on animals, it would be quite useful to see how it performs on humans and how it compares to human-specific baselines.
The paper could use a section discussing the limitations and future work.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Elephant ears and trunk seem part of the same head primitive in Figure 5. Yet the shape is quite complex. Is that due to local deformation? If so, then figure 6 should be changed to use the elephant example instead of the more minor tiger example.
How do you ensure that rotation matrices R are orthonormal with positive determinants?
How would you incorporate the fact that animals are a kinematic chain (bones connected to each other), rather than a bag of shape primitives?
Did you try running your method on humans? It would be very informative to see human results compared to human-specific methods.
The supplemental material shows interesting mappings between different animals. It would be impactful to show more of that analysis in the main paper. Are there any cross-category benchmarks to evaluate it on?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The paper does not explicitly address its limitations in a traditional limitations section.
It is not explicitly stated whether the number of primitives, K, is computed automatically or manually specified. It would be a limitation if it has to be manually specified.
The method treats each primitive separately from others, which does not faithfully represent the kinematic chain that is an animal shape.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer, we greatly appreciate your recognition of our work, and thank you for your valuable comments! We hope the following responses can address your concerns.
>Q1: Animal bodies are kinematic chains, where one limb affects another. This approach does not take that into account. How would you incorporate the fact that animals are a kinematic chain (bones connected to each other), rather than a bag of shape primitives?
A1: LEPARD is a general approach that is not limited to animal bodies; it can also be applied to a wide range of objects, both inanimate (human-made and natural objects) and animate (animals and humans). While this paper focuses on animals, in future work we plan to extend our approach to add kinematic constraints and other object types, i.e., to create a kinematically constrained primitive-based model of an animal or human and fit it to the target shape data (static or dynamic).
>Q2: Performance on humans.
A2: In this paper we focus on 3D part discovery of articulated animal shapes from 2D images and mainly compare with those methods which work on the same task. In future work we plan to use our method on humans due to its general representation ability.
>Q3: Elephant ears and trunk seem part of the same head primitive in Figure 5. Yet the shape is quite complex. Is that due to local deformation? If so, then Figure 6 should be changed to use the elephant example instead of the more minor tiger example.
A3: The shape of ears and trunk of an elephant are reconstructed due to a combination of tapering, bending (i.e., global deformations) as well as local deformations. In Fig.5 of the main paper we ablate using the shape of tigers to better demonstrate the effect of local deformations.
>Q4: How do you ensure that rotation matrices R are orthonormal with positive determinants?
A4: We predict 3D rotations as normalized 4D unit quaternions, which are converted to valid rotation matrices (that are orthonormal with positive determinants) using a closed-form transformation matrix.
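For reference, the closed-form map from a normalized quaternion to a rotation matrix can be sketched as follows (a standard construction, not the paper's exact code):

```python
import numpy as np

def quat_to_rotmat(q):
    """Map an unnormalized 4D quaternion (w, x, y, z) to a rotation matrix
    that is orthonormal with determinant +1, via the standard closed-form
    conversion applied to the normalized quaternion."""
    w, x, y, z = np.asarray(q, dtype=float) / np.linalg.norm(q)
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])
```

Normalizing the predicted 4-vector before the conversion is what guarantees a valid rotation, so no explicit orthonormality constraint has to be enforced during optimization.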
>Q5: The supplemental material shows interesting mappings between different animals. It would be impactful to show more of that analysis in the main paper. Are there any cross-category benchmarks to evaluate it on?
A5: To the best of our knowledge, there are no public cross-category benchmarks to show such 3D part level consistency across species. However, we will add sufficient qualitative comparisons for this to the main paper/supplementary.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: My rating remains.
A further comment on A1:
You cannot claim "LEPARD is a general approach that is not limited to animal bodies" if you only demonstrate it on animal bodies.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer pQ1r
Comment: Thank you for your feedback and additional comment.
Our intention was to emphasize the foundational principles of LEPARD, which are designed to be generalizable. However, we understand that, without empirical evidence demonstrating its efficacy beyond animal bodies, such a claim might be premature. We will ensure that the revised manuscript accurately reflects the scope of LEPARD based on the data presented. In the future, we will actively work on diversifying our dataset and testing LEPARD on a broader range of object categories to substantiate its general applicability. | Rebuttal 1:
Rebuttal: Dear AC and reviewers, we are grateful for the strong recognition and valuable comments of our work. In the following, we will first answer the common concern of all reviewers, followed by answers to each reviewer's comments.
The common concern is mainly the limitation of our approach.
We share the same assumption as LASSIE that all four-limb animals share the same 3D part structure despite considerable shape variations across species, which allows both LASSIE and LEPARD to apply the same number of parts to all four-limb animals. The only difference is that LASSIE uses stricter supervision, i.e., the number of joints K+1 (which determines the number of parts K) and their locations. As opposed to LASSIE, we use the less restrictive common knowledge of K=16 parts for all four-limb animals, and we do not define or use the locations of the K+1 joints. We use kinematics-based modeling, which allows us to discover the K parts. In addition, note that a fixed number of parts is a common assumption for unsupervised 3D part reconstruction from a single image [1-5]. The experiments in the paper demonstrate the semantic consistency and reconstruction accuracy of LEPARD, which shows that our method works well in real-world scenarios/images.
One potential limitation is that if we use other than K=16 for the number of parts, the performance may degrade because only up to 16 parts are visible in the images. For the same reason the performance of LASSIE may degrade if they change the number of joints that corresponds to K=16 parts. Our future work will include active part discovery by monitoring motion changes to overcome the need for initializing the number of parts.
[1] Tulsiani, Shubham, et al. "Learning shape abstractions by assembling volumetric primitives." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017.
[2] Paschalidou, Despoina, Ali Osman Ulusoy, and Andreas Geiger. "Superquadrics revisited: Learning 3d shape parsing beyond cuboids." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.
[3] Deng, Boyang, et al. "Cvxnet: Learnable convex decomposition." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.
[4] Paschalidou, Despoina, et al. "Neural parts: Learning expressive 3d shape abstractions with invertible neural networks." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.
[5] Tertikas, Konstantinos, et al. "Generating Part-Aware Editable 3D Shapes Without 3D Supervision." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
Pdf: /pdf/b76838191a01fbfc620343c6bf54dfdda414a3d3.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper proposes a part-based method to reconstruct 3D shapes in a category-specific manner. Compared with its baseline LASSIE, the paper uses an elegant primitive part representation that can capture both global and local deformations to increase the fidelity of reconstruction. The method also does not need a test-time optimization process like LASSIE does. The proposed method outperforms LASSIE quantitatively and qualitatively.
Strengths: 1. The proposed primitive part representation is novel. Compared with modeling deformation implicitly, the parameterization of the representation is compact and intuitive to understand. The new representation increases the fidelity of the reconstruction.
2. No need for test-time processing. The proposed method does not need per-instance optimization and only requires a forward-pass.
3. No 3D or extra input requirements. The method takes only an image as input, without needing a mesh template or category skeleton as additional input. For training, the method is self-supervised, does not need 2D/3D annotations, and can easily scale up to in-the-wild data.
4. Impressive qualitative results. The visual results shown in Fig. 4 are impressive. The results look quite strong, as the detailed structure of the animal can be fully recovered in a primitive-based approach.
5. Outperforms the baseline method in all categories.
Weaknesses: 1. The motivation for introducing the image force in the training objective is not well explained. While LASSIE adopts a simple silhouette loss via differentiable rendering, the paper introduces complex Jacobian matrices and generalized forces. What benefit do we get from this, and what are the differences and limitations of using a silhouette loss as LASSIE does? The authors should include more experiments to demonstrate the superiority of such a kinematics-inspired procedure.
2. Several points are not fully understood. Please see the questions below.
3. No discussion of the limitations of the method or of when it would fail.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. How does the training objective guarantee semantic consistency? The paper adopts the same processing of DINO features as LASSIE uses for training. However, unlike LASSIE, the method does not need test-time optimization, and the training objective does not explicitly model 2D-3D consistency. The authors should explain in more detail how they achieve better results than LASSIE.
2. How is the number of parts chosen for each category? Different from LASSIE, the method does not take a skeleton as input, so how is the number of parts chosen for an unknown category? If the number of parts is high, will there be an over-segmentation problem that loses semantic meaning and consistency?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Please see comments above. It would also be quite interesting if the output parts could be further retargeted to novel poses. The part-based approach is hard for animation against parametric models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer, we greatly appreciate your recognition of our work, and thank you for your valuable comments! We hope the following responses can address your concerns.
>Q1: The motivation for introducing the image force in the training objective is not well explained. While LASSIE adopts a simple silhouette loss via differentiable rendering, the paper introduces complex Jacobian matrices and generalized forces. What benefit do we get from this, and what are the differences and limitations of using a silhouette loss as LASSIE does? The authors should include more experiments to demonstrate the superiority of such kinematics-inspired procedures.
A1: The image force is calculated using the pixel-level difference between the primitive parts and the 2D image, and measures how much the primitive surface needs to be deformed to match the ground-truth shape in 2D. The image force can be viewed as playing the same role as the silhouette loss in LASSIE, which alone is not enough for either approach, especially in the 2D-3D reconstruction scenario. As opposed to LASSIE, which uses multiple separate regularization terms resulting in sub-optimal performance, we calculate image forces in 2D and convert them into generalized forces to enable a joint regularization of the shape-related parameters. This allows us to regularize each sub-transformation of each primitive part deformation, and provides more effective constraints during training without requiring knowledge of joint locations. The extensive experiments demonstrate the effectiveness and robustness of our optimization strategy. For comparison, in Table 2 of the Author Rebuttal PDF, we provide an ablation study using only the image forces in the loss function and demonstrate consistently superior LEPARD performance against baselines.
>Q2: How does the training objective guarantee semantic consistency. The paper adopts the same process of DINO features as LASSIE used for training. However, unlike LASSIE, the method does not need test-time optimization and the training objective does not explicitly model 2D-3D consistency. The author should explain how they achieve better results than LASSIE in more detail.
A2: See also A1 for details on how our method works which is different from LASSIE. We use kinematic-based modeling which jointly constrains the parameters of translation, rotation, and deformations and results in a more robust solution. As opposed to LASSIE, these parameters are learnt and guide each primitive to deform and fit more accurately to a part of the shape. The kinematic constraints result in learning a more consistent parametric transformation among similar animal categories.
>Q3: It would also be quite interesting if the output parts could be further retargeted to novel poses. The part-based approach is hard for animation against parametric models.
A3: Our model is parametric (i.e., each part is fully parameterized by a few shape-related parameters) and thus is naturally suitable for reposing by changing the translation or rotation matrix. Note that since we use fully explicit representation, we can even change the bending or tapering degree of each part by adjusting the global deformation parameters. This is impossible to do with implicitly represented parts as in LASSIE and the other baseline methods since they use MLPs to directly estimate the point-wise deformation field. In addition to being more robust to missing data or gaps in the data compared to implicit methods, our explicit shape representation approach offers shape explainability (e.g., tapering, bending shape information).
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the detailed response. The rebuttal addresses most of my concerns. Here are my remaining concerns and comments:
1. The image force is better than the silhouette constraint per Table 2 in the rebuttal. Please consider adding this experiment to a future version.
2. It is encouraged to add the retargeting visualization results as explained in the rebuttal. This could be a clear advantage compared against other implicit representations. It would be useful and interesting to see how it works when changing some bending and tapering degrees.
3. Regarding limitations, I am a bit confused about why only 16 parts work and what will happen if we increase/decrease the number of parts. For a more complicated shape, are 16 parts sufficient? For a simple shape, do 16 parts make it over-complicated? The limitation should be included and presented more clearly in the latest version.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer ifgQ
Comment: Thank you for your further insights and feedback. Here is our response to the comments:
1. We acknowledge your suggestion regarding the performance of the image force compared to the silhouette constraint, as observed from Table 2 in our rebuttal. In the subsequent version of our work, we will ensure that this comparison is explicitly included.
2. We agree that this can be a great addition, showcasing the practical advantages of our approach compared to other implicit representations. We will incorporate visualizations that demonstrate the modifications to bending and tapering degrees to provide a clearer understanding of our method's efficacy in the context of shape manipulations.
3. For comparison we use the same number of parts as the baseline method. The choice of 16 parts was derived from the anatomical understanding of four-limb animals, capturing the core structure while preserving semantic meaning across different species, despite their shape variations. For object categories other than four-limb animals, the number of parts could be different. The determination of the optimal number of parts will be contingent on the anatomical specifics of the object category in question. We'll investigate this further and provide a more detailed explanation in the updated version. This will give readers a better insight into how the number of parts affects the method's performance for various shapes, as well as any computational considerations. | null | null | null | null | null | null |
A fast heuristic to optimize time-space tradeoff for large models | Accept (poster) | Summary: This paper proposes an algorithm for the rematerialization problem. The proposed algorithm is based on simulated annealing.
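For context, the core of a simulated-annealing heuristic of this kind can be sketched as follows (purely illustrative; this is a generic skeleton, not FastSA's actual algorithm):

```python
import math
import random

def anneal(init, neighbor, cost, t0=1.0, cooling=0.995, steps=5000, seed=0):
    """Minimize `cost` by accepting worse neighbors with probability
    exp(-delta / T) while the temperature T cools down geometrically."""
    rng = random.Random(seed)
    x, c = init, cost(init)
    best, best_c = x, c
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        cy = cost(y)
        if cy < c or rng.random() < math.exp((c - cy) / t):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
        t *= cooling
    return best, best_c

# Toy use: minimize (x - 3)^2 over the integers with +/-1 moves.
best, best_c = anneal(0, lambda x, r: x + r.choice([-1, 1]), lambda x: (x - 3) ** 2)
```

In a rematerialization setting, the state would instead be a schedule with recomputation decisions and the cost a combination of memory footprint and recomputation overhead.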
Strengths: Please see the questions section.
Weaknesses: Please see the questions section.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - As far as I know, Checkmate has two versions: 1) MILP, 2) Rounded LP. It looks like the authors consider the rounded LP only. Based on my experience, rounding leads to very low quality solutions, especially for complex DAGs. I understand that because Gurobi is a commercial solver, it could be difficult for the authors to obtain a comparison for the Gurobi version. One solution to this is to use the SCIP solver (maybe use its API directly or via CVXPY) or even use the available MILP solver in Google OR-Tools. More importantly, what I want to get at is that it's not clear to me how close to optimal the solution produced by the proposed algorithm is. This could be numerically studied by taking a smaller graph (maybe 100 nodes or so) and then solving the MILP of Checkmate using SCIP to global optimality. Then, the authors could report how close to this number their solution is. I expect this would make the results more complete.
In summary, I am convinced that the proposed method works better than the rounded LP, but I also want to point out that the rounded LP doesn't work well anyway. I'd be curious to see how far it is from optimal, at least for graphs of manageable size.
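As a concrete illustration of the open-source route suggested above, a small MILP can be solved to global optimality with SciPy's HiGHS backend (`scipy.optimize.milp`, available in SciPy 1.9+); the toy problem below is hypothetical:

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# Hypothetical toy problem: maximize x + 2y subject to x + y <= 3,
# with x, y integer in [0, 3]. milp() minimizes, so negate the objective.
res = milp(
    c=np.array([-1.0, -2.0]),
    constraints=LinearConstraint(np.array([[1.0, 1.0]]), ub=3.0),
    integrality=np.array([1, 1]),   # 1 marks a variable as integer
    bounds=Bounds(0, 3),
)
# res.x holds the global optimum found by HiGHS branch-and-bound.
```

The same pattern (open-source branch-and-bound on a small graph, then reporting the gap) is what would make the optimality comparison possible without a Gurobi license.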
- Line 202: "Since the exact cost of a node cannot be determined before execution, we assumed that all operators have the same cost". Perhaps the authors consider profiling the operator durations? Or could it be possible to come up with a more sophisticated (than unit cost) way of assigning durations to the operators?
- I am not expecting the authors to be familiar with this paper or include it in their numerical comparisons since it came out very recently, but it's quite relevant: [r1] "Moccasin: Efficient Tensor Rematerialization for Neural Networks" from ICML 2023. The authors of [r1] show that there exists an nlogn-variable formulation (as opposed to the n^2-variable formulation of Checkmate).
- I think this claim in line 311 may need to be made more precise: "This complexity scales quadratically with the size of the input computational graph, making it a time and memory consuming process." The number of decision variables scales quadratically, not the overall algorithm complexity. The overall algorithm complexity is much worse than quadratic since it's a MILP.
- input(n), output(n): Maybe more standard terminology would be parents/children or predecessors/successors?
- I apologize if this is already mentioned in the text but, what is the difference between "FastSA" and "FastSA Only" in Figure 4?
- How do you define the optimization time in Figure 4? This looks a bit confusing to me, since we know that running a MILP solver with infinite time will eventually yield the optimal solution.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: Please see the questions section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We really appreciate your feedback emphasizing the importance of comparing FastSA results to Checkmate ILP. In our original paper, we relied solely on Checkmate LP due to licensing issues and the large sizes of the models. However, we now understand the necessity of a comparison with Checkmate ILP to evaluate FastSA's solution quality more effectively.
As detailed in the global author rebuttal, we obtained a trial license of Gurobi and carried out additional experiments. Given Checkmate ILP's inability to find feasible solutions for the models used in our original experiments within a 3600-second time frame, we resorted to using vgg11 and resnet18. As per your suggestion, these models, which are composed of approximately 100 nodes, were used to compute optimal recomputation plans.
Table 2 in the attached PDF illustrates the optimality gap between FastSA and Checkmate ILP. For vgg11, FastSA's recomputation plan includes at most one more node than the optimal plan identified by Checkmate. For resnet18, FastSA's plan contains at most 4 more nodes than the optimal Checkmate plan, in a computation graph originally consisting of 171 nodes.
Additionally, Table 1 summarizes the experimental results, incorporating FastSA's results alongside those from the Moccasin paper. The quality of FastSA's solutions can be evaluated here in comparison with the optimal solutions derived by Checkmate or Moccasin. FastSA outperforms both in the case of random layered (RL) models, owing to the substantial flexibility offered by these models in terms of their topological order. FastSA can concurrently optimize this topological order whilst minimizing memory footprint, with almost no additional recomputation nodes.
Although we had to significantly reduce the model size, as depicted in Table 2, to obtain the exact solution through MILP with Checkmate, we anticipate that FastSA offers promising opportunities for producing solutions superior to Checkmate's for complex and larger models, due to the greater degree of flexibility in the model's topological order.
**Q: Line 202: "Since the exact cost of a node cannot be determined before execution, we assumed that all operators have the same cost". Perhaps the authors consider profiling the operator durations? Or could it be possible to come up with a more sophisticated (than unit cost) way of assigning durations to the operators?**
A: Indeed, by utilizing profiled runtime or a FLOPs estimate as the node cost, a more precise recomputation plan can be derived. However, there is a limitation when naively profiling a computation graph: the initial memory footprint of that graph must fit within GPU memory, which becomes a challenge when recomputation is indispensable for fitting the model on a single GPU. To handle such scenarios, an initial run of FastSA with unit costs can be performed to decrease memory usage, followed by profiling; a more suitable recomputation plan can then be developed based on the profiled runtimes.
**Q: I am not expecting the authors to be familiar with this paper or include it in their numerical comparisons since it came out very recently, but it's quite relevant: [r1] "Moccasin: Efficient Tensor Rematerialization for Neural Networks" from ICML 2023. The authors of [r1] show that there exists an nlogn-variable formulation (as opposed to the n^2-variable formulation of Checkmate).**
A: Please check the above reply and the global author rebuttal.
**Q: I think this claim in line 311 may need to be made more precise: "This complexity scales quadratically with the size of the input computational graph, making it a time and memory consuming process." The number of decision variables scales quadratically, not the overall algorithm complexity. The overall algorithm complexity is much worse than quadratic since it's a MILP.**
A: We apologize for the unclear explanation. We will revise it in our final version.
**Q: input(n), output(n): Maybe a more standard terminology is to use parents/children predecessors/successors?**
A: We will revise that section in our final version.
**Q: I apologize if this is already mentioned in the text but, what is the difference between "FastSA" and "FastSA Only" in Figure 4?**
A: “FastSA Only” means that only FastSA could find a solution; Checkmate LP could not find one within the six-hour time limit. We will add this explanation to the caption.
**Q: How do you define the optimization time in Figure 4? This looks a bit confusing to me, since we know that by running a milp solver with infinite time will get us optimal.**
A: Optimization time in Figure 4 is not the total node costs of the optimized computation graph, but the time spent by FastSA/Checkmate algorithm to find a suitable recomputation plan.
---
Rebuttal Comment 1.1:
Comment: Thanks for the responses to my questions. The rebuttal resolves most of my concerns. The additional numerical results raise my confidence in the proposed method. Especially the fact that the topological ordering is optimized (so there is no need to stick with a highly sub-optimal topological ordering such as a random one) is an important contribution of this paper, in my opinion. I've raised my score to 6. | Summary: This paper proposes a new method for optimizing recomputation in neural network training. It formalizes recomputation as a sequence of nodes that indicates the computation schedule. It then uses simulated annealing with a segment tree to find a sequence that optimizes throughput within a certain memory budget. Experiments show significantly reduced computation overhead compared to the state-of-the-art optimizer Checkmate in memory-restricted cases. It also has a significantly lower solving time than Checkmate.
Strengths: 1. The paper is clean and easy to follow.
2. The formulation of the problem is clear.
3. The proposed method is novel for this problem.
4. The evaluation is instructive and convincing. The real implementation is aligned well with the simulator.
Weaknesses: 1. The contribution is incremental. It does not open a new problem or new angle, and the technique is common.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Figure 3: Why is there no checkmate bar for AVE?
2. Figure 3: Why is there not much difference between FastSA and Checkmate for 50%?
3. Is it true that FastSA cannot beat Checkmate on transformers?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The linearization of the graph and the relaxation used in the algorithm makes the solution sub-optimal.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review.
While acknowledging that our problem formulation resembles existing work, it is pivotal to highlight that, to our knowledge, ours is the first algorithm able to perform recomputation on arbitrary computational graphs and to be applied to real-world models involving thousands of nodes, making our solution unique in this area. Existing algorithms such as Checkmate were capable of handling the prevalent models of their time, such as ResNet. However, with the evolution of model sizes and complexities, these methods fall short when it comes to handling larger models like modern Transformers, which are heavily constrained by the memory capacity of contemporary devices. In this sense, our research tackles a problem that is becoming increasingly significant, offering new solutions and pushing the boundaries of what is achievable.
Although simulated annealing (SA) is a common approach to such optimization problems, we utilize segment trees within our SA to reduce the time complexity of re-evaluating the peak memory after each mutation of the computation graph from a naive O(n) to O(log n). This technique is the core of our algorithm, allowing it to process large graphs with thousands of nodes.
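To make this mechanism concrete, here is a minimal sketch of an add-max segment tree of the kind described above (our own illustrative Python, not the authors' implementation): each tensor's size is added over its lifetime interval in the schedule, and the root holds the peak memory, so applying or rolling back a mutation costs O(log n).

```python
class AddMaxSegmentTree:
    """Segment tree over schedule positions, supporting range-add and
    global-max queries. Each value's size is added over its lifetime
    interval; the root then holds the peak memory of the schedule."""

    def __init__(self, n):
        self.n = n
        self.maxv = [0] * (4 * n)  # max over the segment, lazy adds included
        self.lazy = [0] * (4 * n)  # pending add applied to the whole segment

    def add(self, l, r, delta, node=1, lo=0, hi=None):
        """Add `delta` to every position in the interval [l, r]."""
        if hi is None:
            hi = self.n - 1
        if r < lo or hi < l:
            return
        if l <= lo and hi <= r:
            self.maxv[node] += delta
            self.lazy[node] += delta
            return
        mid = (lo + hi) // 2
        self.add(l, r, delta, 2 * node, lo, mid)
        self.add(l, r, delta, 2 * node + 1, mid + 1, hi)
        # No push-down needed for whole-range max queries.
        self.maxv[node] = self.lazy[node] + max(
            self.maxv[2 * node], self.maxv[2 * node + 1]
        )

    def peak(self):
        """Maximum over all positions, i.e. the schedule's peak memory."""
        return self.maxv[1]
```

A rejected mutation can be reverted by adding the negated size over the same interval, keeping each re-evaluation logarithmic.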
**Q: Figure 3: Why is there no checkmate bar for AVE?**
A: The absence of an AVE bar for Checkmate in the figures is due to Checkmate's inability to find a solution within the six-hour time limit for some instances. This detail is covered in line 252 of the manuscript, but we acknowledge that the explanation might not have been sufficiently clear. For better clarity, a direct explanation will be added in the figure captions.
**Q: Figure 3: Why is there not much difference between FastSA and Checkmate for 50%?**
A: The relative ease of recomputation at this level allows both algorithms to effectively identify good solutions. However, as the budget reduces to 25%, the number of feasible computation orders decreases and the recomputation plans will become more complex, introducing challenges for rematerialization algorithms. This point will be better elaborated in the revised manuscript for improved understanding.
**Q: Is it true that FastSA cannot beat Checkmate on transformers?**
A: When dealing with transformer models, FastSA tends to outperform Checkmate. This is supported by a comparative analysis of transformer models in Figure 12 of the supplementary materials. For the six transformer models in Figure 12 where both FastSA and Checkmate results are available, FastSA achieves on average 9.6% smaller memory usage and is 4.1% faster than Checkmate.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification.
This paper [1] came out last month, which is later than the neurips submission but very relevant. Could you add a discussion comparing your work with it in the revised version?
[1] Rockmate: an Efficient, Fast, Automatic and Generic Tool for Re-materialization in PyTorch
---
Reply to Comment 1.1.1:
Title: About comparison between FastSA and Rockmate
Comment: We appreciate the reference to Rockmate, of which we were not aware at the time of this paper's submission. We plan to discuss and compare the results in the revised version of our manuscript. In the meantime, we would like to briefly outline the key distinctions between Rockmate and FastSA.
Rockmate adopts a "block sequence" approach for models to reduce the heavy ILP computation associated with Checkmate. This involves precomputing recomputation strategies within blocks for several memory budgets using Checkmate, followed by a dynamic programming optimization of the global memory, named Rotor [Beaumont et al. 2019].
Rockmate's strength over Checkmate lies in its ability to significantly expedite optimization time if the model is a sequential model of identical layers, thanks to the precomputation with Checkmate required only for a single block. However, Rockmate's disadvantage is its restriction to sequential models, and it does not scale for general computation graphs. Particularly, the block partitioning described in Appendix A.4 of Rockmate's paper does not reduce memory sufficiently if the computation graph has significant topological freedom.
Conversely, FastSA caters to any form of computation graphs, and its optimization time is less dependent on network structure, allowing for flexible model modification during the development's trial and error stages.
Given Rockmate's focus on specific models like ResNet50 or GPT2, which are sequential and repeat the same layers, it wouldn't be fair to only compare FastSA to these models. However, we found FastSA to yield better solutions on ResNet50 (-80.6% memory, +10.7% time) compared to Rockmate (-65% memory, +23% time).
We will describe more details about the experimental results in our final revision. | Summary: This paper proposes the Fast Simulated Annealing Algorithm (FastSA), based on the Add-max segment tree and simulated annealing, to optimize memory usage and training time. Furthermore, FastSA introduces grouped nodes to aid the convergence of simulated annealing and effectively reduce the peak memory. It can also be extended to incorporate other memory optimization techniques such as offloading. As a result, FastSA successfully finds optimal solutions, even for large language models where existing checkmate algorithms fail, leading to optimized training speed and memory usage.
Strengths: This paper presents a **novel approach** that differs from the existing checkmate method in three key aspects.
+ First, it formulates the problem as optimizing the sequence of nodes by defining each node to represent a single operator and introducing the concept of Lifetime.
+ Second, it proposes an algorithm that effectively combines the add-max segment tree and simulated annealing to solve this problem.
+ Third, it optimizes the process through grouping.
These ideas are not only highly innovative but also demonstrated to **successfully optimize memory and training time when applied to large models**. This highlights the significance of the findings in this paper.
Weaknesses: **Lack of comparisions**
The FastSA finds the optimal solution based on the concept of lifetime. This approach shares many similarities with the recently proposed event-based optimization algorithm called Moccasin [1]. Specifically, Moccasin also claims to achieve superior performance compared to MLIP-based Checkmate by utilizing time interval information for optimization. However, the FastSA paper does not mention this and lacks quantitative or qualitative comparisons.
[1] Moccasin: Efficient Tensor Rematerialization for Neural Networks
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: If a proper comparison between this algorithm and Moccasin is conducted and it is demonstrated that FastSA achieves significantly better performance, I am willing to raise the score to 7 or higher.
+ A qualitative comparison between the Moccasin algorithm and FastSA, highlighting the superior aspects of FastSA.
+ A quantitative comparison between the Moccasin algorithm and FastSA based on experimental results.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors provide a thorough discussion of the limitations and potential future work in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review.
We acknowledge and appreciate the reference to Moccasin, which was unavailable at the time of our paper's submission, but is indeed an important comparison to make. Consequently, we have conducted further comparisons between Moccasin and FastSA, which we will include, along with a detailed explanation, in our revised manuscript.
**Q: A qualitative comparison between the Moccasin algorithm and FastSA, highlighting the superior aspects of FastSA.**
A: Unlike Checkmate and FastSA, Moccasin uses Constraint Programming (CP) to address the rematerialization problem. By focusing on lifetime intervals, Moccasin reduces the number of integer variables in the CP to linear in the number of nodes, compared to the quadratic number of boolean variables in Checkmate. However, the recomputation plans produced by Moccasin may not be efficient enough, because its search space for solutions is $O(2^{Cn} + n^{Cn})$, which is significantly narrower than Checkmate's $O(2^{n^2 + nm})$ or FastSA's, where C is the hyperparameter representing the maximum number of recomputations per node; in the open-source implementation it is configured as C=2. The solution space of FastSA is even larger than Checkmate's, meaning that FastSA can find even better recomputation plans than both Checkmate and Moccasin. Setting C=2 is practical in terms of execution time, but the memory reduction on sequential models is then limited to O(√n), whereas O(log n) or O(1) memory is feasible when more than two recomputations per node are allowed.
In fact, the experiments in the original paper of Moccasin (also cited in Table 1 of our attached pdf file), were conducted for 90% and 80% memory budgets, where the peak memory can be reduced with a small number of recomputations. However, as shown in our original experiments, we can even reduce memory to 25% given modern large-scale models. In such situations, the constraint of C=2 in Moccasin can significantly limit the solution quality. By setting a larger C value, the space of candidate solutions increases exponentially, limiting the potential applicability of aggressive memory reductions. For more explanation on the size of search space, please refer to Table 1 of Moccasin paper.
**Q: A quantitative comparison between the Moccasin algorithm and FastSA based on experimental results.**
A: As shown in Table 1 of our attached PDF, FastSA consistently identified solutions with a smaller Total Duration Increase (TDI), i.e., recomputation overhead, faster than Moccasin in many instances. Notably, for the random layered (RL) case, FastSA minimized memory consumption with up to 4.9% smaller TDI than Moccasin, without the need for additional recomputation nodes, simply by optimizing the topological ordering. While Moccasin operates faster than Checkmate, it takes approximately an hour for cases nearing 1000 nodes, whereas FastSA finds superior solutions in a fraction of that time.
It's important to note that constraint optimization problems, as addressed by Moccasin and Checkmate, generally demand substantial computational time to arrive at feasible solutions, all the more so as the solution space shrinks. As a result, even with an equal number of nodes, the computational time escalates considerably as the budget tightens. This is evident in Table 2 of our additional materials, which illustrates an example with Checkmate: Checkmate took more than 40x longer to optimize resnet18 when the budget was tightened from 80% to 60%. Accordingly, the benefits of employing FastSA are even more pronounced under tighter budgets, as its optimization time increased by at most 5% when the budget was changed from 90% to 50% for resnet18.
We trust that we have adequately addressed your concerns. Thank you once more for your thoughtful review.
---
Rebuttal Comment 1.1:
Title: Response to Authors
Comment: I sincerely appreciate the author's effort and response. Thanks to this, I now clearly understand what advantages FastSA has over other algorithms like Moccasin. | Summary: This paper introduces a fast simulated-annealing approach combined with heuristics for gradient checkpointing/recomputation.
The solution achieves a significant memory reduction of 73% with an average recomputation cost of 18%.
It outperforms the state-of-the-art MILP-based technique Checkmate in terms of runtime by orders of magnitude, while still meeting the memory budget requirements.
Unlike Checkmate, which utilizes MILP optimization on the graph, this work employs a sequence of nodes to represent values and a binary variable LS(v, t) -> {0, 1} to indicate whether a value v is in memory at time t. The optimization objective is defined as the maximum of the resource utilization function M and the budgeted compute cost C. To optimize the solution, the paper utilizes an add-max segment tree structure for efficient interval evaluation. The optimization algorithm employed is simulated annealing (SA). The approach allows three mutations of the sequence: add, remove, and rotate, enabling effective exploration of different configurations.
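The optimization loop described in this summary can be sketched as a generic simulated-annealing skeleton (our own illustrative Python; the callback names are hypothetical and not from the paper):

```python
import math
import random

def simulated_annealing(state, evaluate, mutate, undo,
                        t_start=1.0, t_end=1e-3, steps=10000):
    """Generic SA loop over candidate schedules (sketch).

    `evaluate` returns the objective to minimize (for a recomputation
    planner, e.g. peak memory combined with compute cost, re-evaluated
    in O(log n) via an add-max segment tree); `mutate` applies one
    random in-place move (add / remove / rotate in the paper) and
    returns the information `undo` needs to revert it.
    """
    cur = best = evaluate(state)
    for step in range(steps):
        # Geometric cooling schedule from t_start down to t_end.
        temp = t_start * (t_end / t_start) ** (step / steps)
        move = mutate(state)
        cand = evaluate(state)
        # Always accept improvements; accept uphill moves with
        # Boltzmann probability exp(-delta / temp).
        if cand <= cur or random.random() < math.exp((cur - cand) / temp):
            cur = cand
            best = min(best, cur)
        else:
            undo(state, move)  # revert a rejected mutation
    return best
```

The key design point stressed by the paper is that `evaluate` must be cheap after a mutation, which is exactly what the add-max segment tree provides.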
Strengths: 1. The approach presented in this work introduces a novel and scalable solution for addressing the recomputation problem. Rather than formulating the problem as MILP which can be computationally expensive, this work reformulates it using sequences of nodes and utilizes an efficient data structure to evaluate its objective function. The problem is then optimized using the well-known simulated annealing algorithm, eliminating the need for a commercial solver.
2. By adopting this approach, this work manages to reduce the memory requirements to meet the specified 25% and 50% memory budget without incurring high computation costs.
3. The optimization time of this approach is notably faster, around 3-4 orders of magnitude compared to the previous state-of-the-art technique Checkmate.
Weaknesses: 1. The baseline Checkmate is run using the open-source OR-tool instead of Gurobi. It not only affects the runtime (which can be orders of magnitude longer) but also the quality of the results of MILP.
2. The comparison of end-to-end training time between the proposed approach and Checkmate is not provided. The increase in compute cost may not necessarily lead to improved performance if the workload is not compute-bound.
3. The topic could probably be better evaluated and discussed at a systems conference.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. It would be important to re-evaluate the Checkmate results using Gurobi.
2. Can you show a training time comparison of the solutions generated by SA and Checkmate?
3. Can you provide some explanation of why Checkmate as an ILP failed to meet the memory budget requirements? Are the constraints not precisely formulated in Checkmate?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: - The grouping heuristics for nodes might not be optimal
- There are many other GPU memory usage factors to consider in the optimization.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. In our original experiments, we compared our algorithm to Checkmate LP, instead of the exact ILP, using open-source LP solver OR-Tools due to the prohibitive cost of commercial ILP/LP solvers. However, we acknowledge that the evaluation of the solution quality of our algorithm is important and we procured a trial license of Gurobi and conducted an initial exploratory study. Due to the short rebuttal period, we did not replace all of the Checkmate LP results solved by an OSS solver with Gurobi. Instead, we concentrated on verifying the optimality of FastSA’s solutions by solving Checkmate ILP (no relaxation) with Gurobi. We will replace the results in the next version of the manuscript.
Also as summarized in the global author rebuttal, we first tried to evaluate all of the models used in our original experiments by Checkmate ILP with Gurobi, but even for smaller models within 1000 nodes, none of them was solved within the time limit of 3600 seconds. Therefore, we prepared much smaller instances (vgg11, resnet18) for calculating optimal solutions.
Table 2 in the attached pdf shows the optimality gap between FastSA and Checkmate ILP. For vgg11, FastSA's recomputation plan contained at most one more node than Checkmate's optimal plan. For resnet18, FastSA's plan contained at most 4 more nodes than Checkmate's optimal plan, where the original computation graph has 171 nodes. In addition, Table 1 summarizes the experiment results by adding FastSA's results to the table in the paper of Moccasin, a recomputation algorithm recently published at ICML 2023. Here, we can also discuss the solution quality of FastSA compared to the Checkmate/Moccasin optima. In the RL (random layered) cases, FastSA's results are better than Checkmate's or Moccasin's. This is because there is a high degree of freedom in the topological order of the RL models, and FastSA can simultaneously optimize the topological order while minimizing the memory footprint with almost no additional recomputation nodes.
**Q: It would be important to re-evaluate the Checkmate results using Gurobi.**
A: Please kindly refer to the above comments and the general response for more details on this.
**Q: Can you show a training time comparison of the solutions generated by SA and Checkmate?**
A: In our paper, we did not directly assess the end-to-end training time. Instead, akin to prior work on recomputation [Kumar et al. NeurIPS 2019, Kusumoto et al. NeurIPS 2019, Jain et al 2020, Bartan et al. 2023], we focused on quantifying the additional computational time required when the memory budget for the model is constrained. However, an estimation of end-to-end training time can be inferred from our optimization results. Generally, the memory used in training directly corresponds with the batch size. Hence, if the memory is reduced to 1/k, it enables handling a batch size that is roughly k times larger. Assuming an average increase in computational time per operator, symbolized as c (a value estimable through benchmarks), the throughput of end-to-end training is likely to improve by a factor of k/c.
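The back-of-the-envelope throughput estimate above can be written out explicitly (illustrative only; the function and variable names are our own, and the numbers below are arbitrary examples, not results from the paper):

```python
def estimated_throughput_gain(k, c):
    """Rough end-to-end training throughput estimate: memory reduced to
    1/k permits a roughly k-times larger batch, while recomputation
    makes each step c times slower, so throughput improves by ~k/c."""
    return k / c

# Example: quartering memory (k=4) at a 25% per-step overhead (c=1.25)
# would suggest roughly a 3.2x throughput improvement.
```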
For distributed training, data transfer bandwidth can pose a bottleneck rather than the arithmetic computational cost, rendering the end-to-end experimental setup more complex. However, as detailed in Appendix B.3, FastSA can be adapted to problem settings that include data transfer. With a suitably defined objective function, it may potentially accommodate these scenarios in the future.
**Q: Can you provide some explanation of why Checkmate as an ILP failed to meet the memory budget requirements? Are the constraints not precisely formulated in Checkmate?**
A: The experiment results of Checkmate in our paper are all solutions from the LP relaxation problem setting, not the ILP. Since SCIP, an open-source ILP solver, could not solve any instance in our original experiments within the time limit of 6 hours, we decided to solve the relaxed problem. The solution obtained from LP relaxation is generally not an integer, and the memory budget constraint may be violated during the randomized rounding process. More detailed explanations about this can be found in Section 5.2 of the original paper of Checkmate [Jain et al. 2020].
As detailed previously, to evaluate the solution quality compared to the ILP optimal, we added extra results on vgg11 and resnet18 in the attached pdf file. Please note that even with Gurobi, we could not find optimal solutions within 3600s even for the smallest models used in our original experiments.
We hope this adequately addresses your concerns and appreciate your feedback.
---
Rebuttal Comment 1.1:
Comment: Thanks for the responses to the questions. The rebuttal has addressed the majority of my concerns. I raised my score from 5 to 6. | Rebuttal 1:
Rebuttal: Dear reviewers
We appreciate the insightful feedback. The review comments have been proven to be very valuable and help to increase the quality of the paper. Here, we provide additional results regarding two common concerns; (1) Comparison with Moccasin [Bartan et al.], a rematerialization algorithm recently published in ICML 2023, and (2) The solution quality of FastSA (our algorithm), compared to Checkmate when solved with the exact ILP (integer linear programming), using a strong commercial solver, Gurobi.
For other questions, please refer to the individual replies.
**Q: Comparison of FastSA and Moccasin [Bartan et al. ICML 2023]**
A: We have benchmarked FastSA against Moccasin using the publicly available data released by its authors and observed that FastSA is ~100 times faster and finds solutions with recomputation overhead less than or equal to Moccasin's in 10 of 12 cases when evaluated on graphs with ~1000 nodes.
Moccasin formulates the problem of rematerialization by constraint programming (CP), which is similar to Checkmate’s ILP. Compared to Checkmate, Moccasin reduces the number of boolean variables from quadratic to linear, resulting in faster execution.
However, the scope of feasible solutions within Moccasin is significantly limited by the C hyperparameter, which represents the maximum number of times a value can be rematerialized. In practice, C is set to 2 because the search space of Moccasin is $O(2^{Cn} + n^{Cn})$ and thus its computational time grows exponentially for larger C. Specifically, when dealing with sequential models, this setting of C=2 limits the memory reduction to O(√n), despite the existence of recomputation sequences with O(log n) or O(1) memory utilization that can be found by both FastSA and Checkmate.
Table 1 in the attached pdf file shows the comparison among Checkmate, Moccasin, and FastSA. For RL (random layered) graphs, FastSA is able to find better solutions than both Checkmate and Moccasin. Especially, for 90% budget cases such as RL1 and RL2, FastSA reduced the memory budget up to 22.6% just by optimizing topological ordering without adding extra recomputation nodes.
**Q: How close to the optimal the solution produced by FastSA is?**
A: In the original version of the FastSA paper, a comparison with Checkmate ILP was not performed due to the prohibitive cost of commercial ILP solver licenses. Additionally, open-source solvers such as SCIP failed to complete any of the experiments within a reasonable time frame. However, as pointed out by reviewers wXFY and w4oG, a rigorous comparison with an ILP solver is necessary to ensure fairness.
To accommodate this, we procured a trial Gurobi license and re-ran all experiments on relatively small models (within 1000 nodes, 50% budget) from the evaluation section of the paper, after cross-verifying our Checkmate ILP+Gurobi implementation against the Checkmate paper's experiments. Regrettably, even with Gurobi, the experiments could not be completed within a time limit of 3600 seconds due to the considerable size of the models involved. As per reviewer wXFY's suggestion, we incorporated graphs with approximately 100 nodes, such as vgg11 or resnet18, and obtained their optimal solutions using Checkmate ILP+Gurobi.
Table 2 in the attached PDF presents a comparison between Checkmate and FastSA for smaller instances. For vgg11, under the tightest budget constraint (80%), FastSA identified the same recomputation plan as Checkmate. For resnet18, wherein the tightest budget that allowed an optimal solution was 60%, FastSA's Total Duration Increase (TDI), i.e., recomputation overhead, was 1.2% above Checkmate's TDI. Even though FastSA did not determine an optimal recomputation plan for this scenario, its runtime is almost 100 times faster than Checkmate's, suggesting its potential for integration into neural network compilers to optimize a diverse range of large models.
Table 1 also compares Checkmate ILP and FastSA. For the RL cases, FastSA found solutions with a maximum TDI of 0.3%, whereas all Checkmate solutions had a maximum TDI of 0.8%. In these scenarios, FastSA's solutions superseded those of Checkmate as FastSA was successful in reducing the peak memory through optimizing the topological ordering. Please refer to Appendix A.1 for a more detailed explanation of this constraint in Checkmate when dealing with computation graphs that offer high topological flexibility.
Pdf: /pdf/bc64e4aa27c42882a21a2f41d2d14e44b7e0cb64.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Multi Time Scale World Models | Accept (spotlight) | Summary: The paper addresses learning predictive world models that operate at multiple (i.e. 2) time scales. At the slower time scale, the belief over the “task” (i.e. the high-level state) is updated at every H time steps by aggregating the influence of the low-level observations and actions received over that period. The transition dynamics for the fast time scale model resemble those for a standard state space model, with the exception that the they are conditioned upon the high-level state. Throughout, the authors utilise linear transition models in latent space, with Gaussian process and observation noise.
Strengths: * The paper addresses an important problem that is under-explored.
* For the most part the writing is quite clear.
* The experimental results appear strong in comparison to a number of baselines.
Weaknesses: * It is unclear whether the experimental results support the authors’ claim that their model can more accurately capture system dynamics because of the two-time scale approach, or whether their method outperforms the baselines due to other factors (such as the imputation-based training). An obvious baseline of updating both the low-level and high-level states at the same frequency (i.e. setting H = 1, and therefore removing the multi time scale aspect) is missing. Please see the questions.
* In the preliminaries, the authors imply that they are building upon the locally-linear state space model from [1], that learns linear dynamics that are *conditional upon the state*. However, the authors model the dynamics in latent space to be linear in the latent state, action, and task context (Equation 5), and this linear transition model is *state-independent* (Line 150, i.e. it is the same linear dynamics for all states). It is unclear how the approach can achieve such strong performance with a completely linear model in a fairly low-dimensional latent space. This is further confused by the fact that Section 3.2 is contradicted by Appendix A.3 which instead tells us that the dependence on the actions is in fact non-linear, and learnt by a multi-layer MLP. Please see the questions.
* I think the paper could be stronger if it presented a more general framework (e.g. arbitrary number of hierarchy levels, arbitrary non-linear dynamics (non-Gaussian)), and then presented their 2-timescale, linear Gaussian model as one example of that framework that is computationally simple. Presenting a more general framework would help to set the groundwork for future works that consider the problem of multi time scale world models.
*Minor comments*
- Covariance matrices Q and R do not appear to be defined.
- Line 31: “under non-stationary” words appear to be missing from this sentence
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * How do the results support the case that it is the multi-time scale modelling of your approach that leads to accurate long-term predictions, rather than other factors such as the imputation-based training? From the results, it appears that the imputation-based training scheme (which is a fairly common technique) is crucial for obtaining strong performance. Were the baselines also trained in this manner, or were they only trained to make one-step predictions?
* Can you include an ablation where *H* is set to 1 in your approach (i.e. there is only a single time scale in the model)? This would help to strengthen the case that this work demonstrates that the multiple time scales lead to improved long-term predictions.
* Can the authors explain how they achieve strong results with an entirely linear model as stated in Equation 5? The authors do not appear to use the state-conditional linear model from [1], as they state the linear model is state independent. In principle, we might expect that there exists a high-dimensional latent space where the dynamics are linear, but the authors use fairly low-dimensional latent spaces (e.g. 30 or 60 dimensions). Thus, I find it difficult to understand how the authors' approach can outperform methods that allow for non-linear latent dynamics.
* Equation 5 (fully linear) appears to contradict Appendix A.3.1 (non-linear action model). Can you please explain this?
* How should the results in Section 6.4 be interpreted? It is unclear to me how the log-likelihood values are supposed to indicate good uncertainty estimates. Are these values the negative log-likelihood of real trajectories evaluated under each model? In which case, wouldn’t the higher likelihood values of MTS3 just indicate that the model is more accurate (not that the uncertainty estimates are better)?
I generally like the paper, and I am open to increasing my score if I feel these questions are adequately addressed during the rebuttal period.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The limitations section in the appendix is appreciated. A discussion of any limitations associated with the assumption of state-independent linear Gaussian latent dynamics would be useful.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for posing these critical questions about our model and for the suggestions given. We attempt to answer them in the following paragraphs and the global comment (and the attached rebuttal pdf document).
**Weakness:** It is unclear whether the experimental results support the authors’ claim that their model can more accurately capture system dynamics because of the two-time scale approach, or other reasons like imputation. **Question:** Can you include an ablation where $H$ is set to 1 in your approach?
We perform an ablation over different values of $H$ (everything else remains fixed) in the rebuttal pdf. A value of $H = 1$ resulted in training instabilities and NaN values; the lowest $H$ value that resulted in stable training was 2. As seen in Figure 1 of the rebuttal pdf, smaller $H$ values (a relatively fast time scale) for the task level give significantly worse performance. Furthermore, we devised a "Flat-MTS3" variant with only one level of hierarchy, with all other parameterization and training schemes, including imputation, kept fixed. This again results in significantly worse performance, indicating that it is, in fact, the multi-time scale approach that produces the strong empirical results over the baselines.
In the section below we suggest a more systematic way of choosing the $H$ value for an MTS3 with an arbitrary number of hierarchies.
**Weakness** I think the paper could be stronger if it presented a more general framework (e.g. arbitrary number of hierarchy levels, arbitrary...)
We had to limit our definition to 2 timescales because of space constraints, but a definition and details of a generic $N$-level MTS3 are given in the "Global Comment" section. We also give a general rule of thumb there on how the discretization step can be chosen (without hyperparameter search). We kindly request the reviewer to read the details in the global comments section.
A section with a generic definition of an $N$-level MTS3 can be included in the final version of the paper.
**Question** Were the baselines also trained in this manner or were they only trained to make one-step predictions?
The baselines GRU, LSTM, RKN and HiP-RSSM were in fact trained with imputation schemes as in previous works. Moreover, we also tried a variety of imputation strategies that were not used by the original authors of these models, to make the comparison as fair as possible. However, none of those made much difference for “flat” / single time scale models (deterministic or stochastic). So this also provides evidence that it is, in fact, the multi-time scale formulation that results in the large improvement in performance.
The ablations with different $H$ in Figure 1 of the rebuttal document and also the prediction plots in Figure 2 further strengthen our claim and we hope this addresses the reviewer's concerns.
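As a minimal illustration of what an imputation-style training scheme looks like, here is a generic sketch (our own illustration, not the authors' or the baselines' actual implementation; `imputation_batch` and the zero placeholder for missing steps are assumptions made for this example): a random fraction of observations in a sequence is masked, and the model would be trained to reconstruct the masked steps from the visible ones (and the actions).

```python
import numpy as np

rng = np.random.default_rng(0)

def imputation_batch(obs, mask_frac=0.5):
    """Build an imputation-style training input (generic sketch).

    Randomly masks a fraction of the timesteps in an observation
    sequence; the model would be trained to reconstruct the masked
    steps from the visible ones. Returns the masked sequence and a
    boolean mask marking which timesteps to score.
    """
    mask = rng.random(len(obs)) < mask_frac  # per-timestep mask
    masked = obs.copy()
    masked[mask] = 0.0                       # placeholder for "missing"
    return masked, mask

obs = rng.standard_normal((100, 3))
masked, mask = imputation_batch(obs)
# The training loss would be computed only on the masked positions, e.g.:
# loss = ((model(masked, actions)[mask] - obs[mask]) ** 2).mean()
```

Training on such masked sequences forces the model to make genuine multi-step predictions rather than one-step copies, which is the property the authors credit for strong long-horizon performance.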
**Question** Equation 5 (fully linear) appears to contradict Appendix A.3.1 (non-linear action model). Can you please explain this?
We apologize for causing this confusion and the reviewer is right regarding this. The control model $B$ for the fast ($fts$) SSM is assumed to be non-linear, everything else is fully linear based on insights provided in [7] which studies the effects of action conditioning in great detail. Since the primitive action is known and not uncertain (zero variance), unlike the state transition model $A$ or the task transformation model $C$, we do not need to constrain the model $B$ to be linear as it neither affects the Kalman gain nor the closed-form **covariance** updates in the Kalman predict step.
Also, we think this is a matter of interpretation. The non-linear action model $b$ (mentioned in Appendix), can also be thought of as an encoder which projects the actions to a linear latent space and the matrix $B$ can be interpreted as the identity in this case ($B=I$). Thus, the dynamics still remain fully linear in the latent space.
We will clarify and make a discussion on this in the final version of the paper.
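To make the $B = I$ interpretation concrete, here is a minimal NumPy sketch (our own illustration with random stand-in parameters, not the paper's implementation) of a Kalman predict step in which a non-linear action encoder shifts only the mean, so the covariance update stays fully linear and closed-form:

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, ACTION_DIM = 4, 2

# Hypothetical learned parameters (random stand-ins for this sketch).
A = 0.9 * np.eye(LATENT_DIM)          # linear latent transition model
Q = 0.01 * np.eye(LATENT_DIM)         # diagonal transition noise
W1 = rng.standard_normal((8, ACTION_DIM))
W2 = rng.standard_normal((LATENT_DIM, 8))

def action_encoder(a):
    """Non-linear MLP b(a): projects the action into the latent space."""
    return W2 @ np.tanh(W1 @ a)

def kalman_predict(mean, cov, a):
    # The action enters only the mean: with B = I, the dynamics remain
    # linear in the latent state and the covariance update is unchanged,
    # because the known action carries zero variance.
    new_mean = A @ mean + action_encoder(a)   # B @ b(a) with B = I
    new_cov = A @ cov @ A.T + Q               # no action term here
    return new_mean, new_cov

mean, cov = np.zeros(LATENT_DIM), np.eye(LATENT_DIM)
mean, cov = kalman_predict(mean, cov, rng.standard_normal(ACTION_DIM))
```

This matches the rebuttal's point: since the action is known and deterministic, the non-linear encoder affects neither the Kalman gain nor the closed-form covariance update.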
**Weakness:** In the preliminaries, it is unclear .. strong performance with a completely linear model. **Question:** Can the authors explain how ... entirely linear model .. in Equation 5?
We would also like to refer to works [11,12] that marry deep learning and Koopman Theory where finite-dimensional linear embeddings are learned with deep encoders for modeling non-linear dynamics.
However, the real workhorse behind MTS3's surprisingly good results is its capability to model dynamics at multiple temporal abstractions in a top-down fashion, with upper levels reconfiguring the lower levels. The multi-time scale idea has been in discussion in the machine learning community for some time. For example, the Hierarchical JEPA architecture proposed (but not implemented) by LeCun in [2] is very similar to what we have in MTS3. We essentially confirmed the hypothesis proposed in [2] by formalizing and implementing it.
**Question:** How should the results in Section 6.4 be interpreted? ... (not that the uncertainty estimates are better).
We would like to refer to some highly cited literature [8] on how the marginal (negative) log likelihood is a combined metric that measures both the accuracy and the uncertainty estimation of regression tasks. Similar metrics were also used in related literature [9,10] to quantify the uncertainty of probabilistic models. We further plot the uncertainty in predictions (shaded regions) in Appendix C and in the rebuttal pdf (Section 2) to give an intuitive feel. But we agree that there is still room for improving the quality of the uncertainty estimates and for using better calibration metrics.
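For intuition, the following minimal sketch (ours, not from the paper or [8]) shows why the Gaussian negative log-likelihood jointly scores accuracy and uncertainty: for the same prediction error, an overconfident (too-small) predicted variance yields a much worse NLL than a calibrated one.

```python
import numpy as np

def gaussian_nll(y_true, mean, var):
    """Per-point negative log-likelihood of y_true under N(mean, var).

    The squared-error term penalizes inaccuracy; the log-variance term
    penalizes inflated uncertainty; together they also punish
    overconfidence (small var with large error), so the NLL measures
    accuracy and uncertainty quality jointly.
    """
    return 0.5 * (np.log(2 * np.pi * var) + (y_true - mean) ** 2 / var)

y = np.zeros(1000)  # ground truth
# Same prediction error (mean = 1.0), different claimed uncertainty:
overconfident = gaussian_nll(y, mean=1.0, var=0.01).mean()  # tiny var
calibrated = gaussian_nll(y, mean=1.0, var=1.0).mean()      # var matches error
```

Here `overconfident` is far larger than `calibrated`, illustrating that a model cannot obtain a good NLL purely by being accurate: its predicted variances must also match its actual errors.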
We hope the responses here and in the global comment section were able to address most of the questions/concerns raised by the reviewer and we remain hopeful that they would consider raising the score. References are listed in the global comments section.
---
Rebuttal Comment 1.1:
Comment: Thanks for the improvements that you have made to the paper during the rebuttal week. My key concerns have been addressed - I think the experiments now more clearly demonstrate the utility of the approach. I will raise my score to a 6.
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply and for increasing the score! | Summary: The paper proposes a formalized multi-scale world model, which works at two timescales: a fast-timestep module that predicts individual timesteps, and a slower one that is only updated every fixed number of steps. The slower module defines a "task" that controls how the fast module functions, and the slower module reads abstract observations/actions over the time window. The paper derives Bayesian updates and ways to optimize the models. Experiments on three different datasets indicate that the proposed method is significantly better than baselines at predicting steps far into the future, and is also better at predicting the uncertainty.
Strengths: - Formal derivation of the method, including the derivation of the uncertainty bounds.
- Comparison against a large number of valid baselines in multiple different settings, and all results indicate the proposed method is better
- Ablation studies on which components of the proposed method are important
- Experiments on both simulators and robots
Weaknesses: - Some weaknesses in the baselines:
- Only the proposed method had a "two-layer" approach timewise, while the other methods were "single-layer". While the formal "two-layer" approach was the main selling point of the proposed method, one can also naively create a similar effect with RNNs or transformers
- E.g., see this paper for two-layer RNN in Section 2.1: Jaderberg, Max, Wojciech M. Czarnecki, Iain Dunning, Luke Marris, Guy Lever, Antonio Garcia Castaneda, Charles Beattie et al. "Human-level performance in 3D multiplayer games with population-based reinforcement learning." Science 364, no. 6443 (2019): 859-865.
- Alternatively, a "single-layer" approach of the proposed method could also be included to better demonstrate the value of "two-layer" approach. However, if two-layer version outperforms single-layer approach, then two-layer results of the other baselines is even more warranted to ensure the benefit is not alone from two-layer approach.
- Transformers used were rather small (4 layers, ~100 dimensions), but much previous work has shown that larger transformers perform better (e.g., 8+ layers, 500 dimensions). While larger models are hard to use in robotics (not much compute / tight time constraints for control), testing the scalability of the different methods in terms of trainable parameters would provide a better picture of the methods.
- If larger transformer turns out to be better at world modelling without big impact in inference time, this method still has the benefit of capturing uncertainty, and can potentially be a better fit in planning.
- While the paper does mention the potential applications of the world modelling approach, there are no experiments to demonstrate this usefulness. The results seem positive for the proposed method (better at modelling the world), but it is unclear how useful this is down the line. For example, how accurate do you have to be to perform good planning for control?
- No code available. The paper does detail the algorithm and setup used to great detail, but the code might contain details that researchers would need to replicate experiments. Small changes to the underlying libraries or how data is pre-processed may have big effects, or there might be parts in the code that were not reported in the paper. Any code, even if messy, is better than no code.
- Given the complexity of the algorithm, having at least a pseudocode to refer to is crucial for correct implementation in the future.
- Example of how code-level implementation details matter: Engstrom, Logan, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, and Aleksander Madry. "Implementation matters in deep rl: A case study on ppo and trpo." In International conference on learning representations. 2019.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1) Transformer models are known to improve in many different tasks as you scale them up in terms of parameters (or: number of layers, dimension size and number of heads per layer). The transformer models used in this paper were rather small (e.g., ~100 dimensionality, few layers). Did you experiment with larger model scales? However, even if transformer models get better at scale, they become slower to infer with, which can be an argument against them when employed in robotics with tight time and compute constraints.
2) How long did it take to train each of the baselines, and how long does it take to run the model (e.g., infer steps 5 seconds into the future)? Knowing the time-to-train and time-to-infer for each method would be useful, even if different code was run on different pieces of hardware. This could strengthen the proposed method's case if it was faster than transformers.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Authors acknowledge the limitations of the work, and propose good paths to extend the work. Authors also discuss the broader impact (no immediate societal impact from the work).
## Rebuttal acknowledgement
I have read authors' rebuttal which addressed my concerns, and raised my score from 4 to 7 and confidence from 2 to 3 (before discussion period closed).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and suggestions. Here are our replies to some of the weaknesses and questions listed.
**Weakness:** Transformers used were rather small (4 layers, ~100 dimensions), testing the scalability of the different methods in terms of parameters to train. **Questions:** The transformer models used in this paper were rather small (e.g., ~100 dimensionality, few layers). Did you experiment with larger model scales? ... inference time/parameters in robotics with tight time and compute constraints.
Large transformers perform well and generalize when there is no scarcity of data, but that is unfortunately not the case in robotics. We extensively tested different hyperparameter settings before arriving at this architecture; larger models resulted in overfitting.
As the reviewer suggested, we provide an analysis of the number of parameters and inference time for the Transformer vs MTS3 across two datasets in Table 1 of the **rebuttal pdf**. As seen in the table, MTS3 uses a fraction of the parameters of the transformers.
**Questions:** How long did it take to train each of the baselines, and how long does it take to run the model? ...
We report a comparison of MTS3 vs the Transformer baselines for training and inference time in Table 1. As far as training time is concerned, transformers scale better, due to the high parallelization possible at this stage. However, GPT-like autoregressive transformers, used widely in the literature, take several times more inference time due to autoregression (something not desirable in online robotic deployment). The non-autoregressive multi-step transformers are faster but have the drawback that they are not flexible, as the number of timesteps they can predict ahead is fixed and hardcoded into the output decoder. Also, note that our code is not as optimized as standard transformer libraries. The performance was evaluated on similar hardware for all models.
**Weakness:** Only the proposed method had a "two-layer" approach timewise, while others methods were "single-layer"... Alternatively, a "single-layer" approach of the proposed method could also be included to better demonstrate the value of the "two-layer" approach.
We include an extensive ablation in the rebuttal pdf, with varying discretization parameters ($H$) and a "Flat-MTS3" which works at a single time scale, as the reviewer suggested. All other aspects of the model were kept fixed, including the training regime with imputation. It clearly shows the advantage of learning the dynamics with observation/action abstractions at multiple time scales.
Additionally, we referred to the paper suggested by the reviewer, Jaderberg et al. "Human-level performance in 3D multiplayer games with population-based reinforcement learning." Science 364, no. 6443 (2019): 859-865. Unfortunately, since there is no public codebase available for the paper, we modified our existing LSTM/GRU baselines, with an additional slow-moving LSTM/GRU Cell as shown in Figure 2 of the reference. The paper shows a "bottom to top" architecture, where the low-level states of the fast RNN update the higher-level slow RNN. The hidden states of both RNNs were decoded to make predictions.
Our experiments on both the mobile and hydraulic robots resulted in worse performance than the single-layer LSTM for this particular architecture. However, we think it would be a fairer comparison only if we had access to their actual codebase or more architectural details.
Also, the 2-layer bottom-up RNN in the said reference was designed to capture a non-Markovian state representation in the context of model-free RL policies, and we do not think it was meant for multi-step ahead predictions. We think that a top-to-bottom hierarchy for the latent variables, as in our case, is essential for making long-term predictions. Ideally, the model should first make the easier higher-level task predictions (more abstract information, without worrying about as many details as the lower levels), independent of the lower-level tasks, and use the higher-level predictions to condition predictions on lower-level windows. MTS3 maintains this top-to-bottom causal relationship between latent variables at multiple temporal abstractions in a principled way.
We would also request the reviewer to take a look at the global comment section, where we place our work in a different light. We think principled formalisms (e.g., SSMs, MDPs, Semi-MDPs in hierarchical RL context) are important for algorithmic advancements, and we lay a foundation for such a formalism for learning models of the world at multiple temporal abstractions.
**Weakness:** No Code Available
We submit a link to an unofficial version of our codebase [here](https://drive.google.com/file/d/103fbLz1ahrfbSCGN0Ko58A7v76EVLPlu/).
We hope we were able to provide sufficient answers to the questions posted by the reviewer and hope that the reviewer would look at the work in a positive light. Please let us know if you need any further clarifications.
---
Rebuttal Comment 1.1:
Comment: Thank you for your answers! I especially enjoyed the explanation and clarification of the top-to-bottom vs. bottom-to-top approach and the differences from the paper I linked, which I do see as not being compatible here. I also agree with the view that this provides a more fundamental theoretical grounding for world models, and the exact implementation with RNNs/Transformers/whatnot is future work that builds on this.
I have one additional question: how was the evaluation done, exactly? This is how I understand it:
1) Take a real trajectory from the dataset of length N + M
2) Take the first N timesteps, and provide these as ground-truth context to the model
3) Model predicts the remaining timesteps M
4) Compute RMSE between true timesteps in the trajectory vs. predicted steps
Is this correct? How were the robot actions handled? Were they predicted as part of the model, or do you provide the real actions from the trajectory? If I understood the description right, only states are predicted, but I am asking just to be sure.
---
Reply to Comment 1.1.1:
Comment: Thank you for the reply to the rebuttal and your understanding.
The reviewer is right regarding the evaluation setup. We collect a ground truth trajectory of observations and actions of length (N + M), from an agent. During inference with MTS3, the first N observations are given as input context (to observation/task encoders), while the rest of the M observations are masked. The model is now tasked to decode N + M observations, conditioned on N + M **known** action/control inputs.
We calculate the RMSE between the predicted/decoded M observations and ground truth observations.
Only observations are predicted/decoded and not actions as they are known (we make action-conditional future observation predictions).
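A minimal sketch of this evaluation loop (the `model.predict` interface and `evaluate` helper are hypothetical names for this illustration; the paper's actual code may differ):

```python
import numpy as np

def evaluate(model, obs, actions, n_context):
    """Action-conditional multi-step evaluation, as described above.

    obs:     (N + M, obs_dim) ground-truth observations
    actions: (N + M, act_dim) known action/control inputs
    `model.predict` is a hypothetical interface: it takes the masked
    observation sequence plus all actions and decodes N + M observations.
    """
    masked = obs.copy()
    masked[n_context:] = np.nan           # mask the last M observations
    pred = model.predict(masked, actions)
    err = pred[n_context:] - obs[n_context:]
    return np.sqrt(np.mean(err ** 2))     # RMSE over the M predicted steps
```

Note that only observations are scored: the actions are inputs throughout, matching the action-conditional prediction setup described in the reply.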
We hope this answers your question! Please let us know if you need any more clarifications. | Summary: Looking to tackle the lack of temporal granularity in existing world models, the paper proposes a multi-time scale linear Gaussian state space model (MTS3). The model uses an efficient closed-form inference scheme on multiple time scales for highly accurate long-horizon predictions and uncertainty estimates over longer horizons. The experiments focus on action conditional long horizon future predictions, showing that MTS3 outperforms recent methods on several system identification benchmarks, including the D4RL dataset, a simulated mobile robot, and real manipulators including data from heavy machinery excavators.
Strengths: - Multi-Time Scale Predictions: Training scalable hierarchical world models that operate at multiple timescales is an open challenge in multiple fields including compressed video prediction, reinforcement learning, control theory, etc. The paper presents one way to interleave between different levels conditioned on a task descriptor. The low-level learns the dynamics conditioned on a particular task and the higher-level is trained to predict the next task.
- Efficient Inference Scheme via factorised formulation: Following Becker, et al, 2019, and Volpp, et al 2020, the paper proposes multi-scale inference via closed-form solutions using simplifying locally linear assumptions.
- I like the way the two levels are connected: through the specification of the prior belief $p(l_k| B_{1:k-1}, \alpha_{1:k-1})$ that defines the $p(l_k)$ in the fast-time scale.
- The use of a probabilistic formalism allows the model to handle uncertainty in predictions, particularly in predicting changes in the dynamics, which is a common challenge across continual learning task settings.
- Robust Performance: The model has been shown to outperform recent methods on several system benchmarks for long horizon predictions, as noted in Figure 3.
- Captures Complex Dynamics: The model can better capture the complex, non-linear dynamics of a system in a more efficient and robust way than models that learn on a single time scale (Fig. 5)
Weaknesses: - In the formulation, the slow time scale SSM is only updated every $H$ steps, i.e., the slow time scale time step is given by $H \Delta t$. Therefore, the results are contingent on a design choice. I am cognizant that this was evaluated for the range $0.2$–$0.5$ s. However, how dramatically could results / predictions change if the wrong discretization step was chosen? Is there a way to systematically infer this through the system?
- The results presented do not show how well the prediction settings vary across different tasks.
- The results do not show how well large deviations in the dynamics (e.g., a large spike) would be encoded.
- The higher-level encoded space is based on the observation space; therefore, the higher level is inherently conditioned on the first level. So, how would scaling up to multiple levels work? Would $o \to B$ change to $o \to z \to B$?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Conceptual:
- Both abstractions interrelate with each other in the sense that the higher-level predictions can be turned into low-level moment-by-moment predictions? How closely aligned were the encoded spaces across the levels for k-1, k when considering the transition from t at k-1 to the next task?
- It is stated that it is trivial to extend to an arbitrary number of temporal abstractions – would it be possible to explain how this would work when there aren’t explicit factors like task that can determine the timestep H?
- How similar do the tasks, $l$ have to be for the results to hold? Change in environment dynamics? It was not clear to me how well the dynamics are encoded as the model transitions predictions from l to l+n, etc.
- Is there a reason why the observation abstraction was chosen over the state space?
Formulation, results and presentation:
- How is the second half of the latent state $d_t$ defined as the derivative that the model uses to estimate the change of the observable part? Would this work for image-based observation formulations of the same model?
- The posterior calculation $P(z_t|o,a)$ involves a factorisation of the covariance matrix. Where are $s, l$ defined?
- In Eq. 1, how are Q and R parameterised; are these the $\mathrm{diag}\,\sigma_{z;o}$?
- What happens when H is manipulated?
- Can tasks from different environments be learnt? Or would this be non-trivial?
- What does the temporal encoding for both encoding of abstracted observation and action look like? Is this the number of steps taken or latent encoding of it?
- Would it be possible to clarify what $m_t$ action entails; is this the derivative or something else?
- The prior belief for the first time step of time window is initialised using the posterior belief of the last time step of time window; does this work for instances there is a massive shift in dynamics?
Minor:
- Is there a reason why some equations are numbered and others not?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have appropriately addressed the limitations of MTS3, specifically regarding the consideration of only two timescales and the model's exclusive evaluation for predictions. Typically, it is customary to validate methods in a controlled setting when assessing the performance of the world model. However, I appreciate that learning the policy from such formulations is tricky and requires further consideration.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and valuable suggestions. We write our replies to the weaknesses/questions listed by the reviewer below.
**(Weakness) "However, how dramatically could results ... wrong discretization step chosen. Is there a way to systematically infer this through the system?"**
Thank you for your insightful comment. The discretization step is an important hyperparameter. We chose $0.2s \leq H \Delta t \leq 0.5s$, based on related works in meta-learning[3], HiP-MDP / HiP-SSM [4], where for most physical systems like robots, dynamics can be assumed to be fixed for such a short duration.
However, we further performed an ablation and some intuitive prediction plots for different $H$ on the mobile robot and hydraulic excavator dataset. The results of this can be found in **the rebuttal pdf**. We mention a rule of thumb on how to choose $H$ in different levels in the next section.
**(Question) It is stated that it is trivial to extend to an arbitrary number of temporal abstractions – would it be possible how this would work when there aren’t explicit factors like tasks that can determine the timestep $H$?**
We had to limit our definition to 2 timescales because of space constraints. But a definition and details of a generic N-level MTS3 is defined in the "Global Comment" section. We also give a general rule of thumb on how the discretization step can be chosen (without hyperparameter search) there. We kindly request the reviewer to read the details in the global comments section.
**(Weakness) Higher level encoded space is based on the observation space... how would the scaling up work for multiple levels? Would it be o -> B or o -> z -> B ?**
**(Question) Is there a reason why the observation abstraction was chosen over the state space?**
We hope the general definition of MTS3 and the computational/implementation details answer the question on how scaling up would work. We chose this approach to maintain a Feudal [5] (Top to Bottom) hierarchy for the latent variables. Ideally, the model should first make the easier higher-level task predictions (more abstract information without worrying about too many details as in lower levels), independent of the lower-level tasks and use the higher-level predictions to condition predictions on lower-level windows. Using observation abstractions allows for this feudal causal relationship.
**(Question) How is the second half of the latent state $d_t$... Would this work for image-based observations .. ?**
Choosing the observation model as $H=[I,0]$ allows for this division of the latent state vector into two distinct components. The upper part utilizes the identity matrix $I$ in $H$ to directly extract information from the observations. Meanwhile, the second lower part remains unobservable and is meant to hold information inferred over time, such as velocities in ordinary dynamical systems or images. The key aspect that contributes to the effectiveness of this choice is the selection of the covariance matrix structure (as mentioned in line 65 of the paper). The covariance matrix is designed to incorporate both diagonal and off-diagonal elements, ensuring that the correlation between the memory and the observation parts is effectively learned in the off-diagonal part. On the contrary, if we were to utilize a pure diagonal covariance structure, it would not update the memory units (the later half) or their variance adequately during the Observation/Task/Kalman Update step.
Yes, this assumption can effectively deal with image-based observations too as demonstrated by [6].
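The following minimal NumPy sketch (our own illustration, not the paper's code) shows how a standard Kalman update with observation model $H = [I, 0]$ and a prior covariance with non-zero off-diagonal blocks updates the unobserved memory part of the latent state:

```python
import numpy as np

obs_dim, mem_dim = 2, 2
n = obs_dim + mem_dim
# Observation model H = [I, 0]: only the upper (observable) part is measured.
H = np.hstack([np.eye(obs_dim), np.zeros((obs_dim, mem_dim))])

# Prior belief with correlated observable/memory parts (off-diagonal != 0).
mean = np.zeros(n)
cov = np.eye(n)
offdiag = 0.5 * np.eye(obs_dim)
cov[:obs_dim, obs_dim:] = offdiag
cov[obs_dim:, :obs_dim] = offdiag

R = 0.1 * np.eye(obs_dim)   # diagonal observation noise
o = np.ones(obs_dim)        # incoming observation

# Standard Kalman update: the gain's lower (memory) rows are driven by the
# off-diagonal covariance block, so the unobserved memory part is updated too.
S = H @ cov @ H.T + R
K = cov @ H.T @ np.linalg.inv(S)
new_mean = mean + K @ (o - H @ mean)
new_cov = (np.eye(n) - K @ H) @ cov
```

With a purely block-diagonal prior covariance, the memory rows of the gain $K$ would be zero and the memory units would never be updated, matching the discussion above about why the covariance structure, not just $H$, makes this latent split effective.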
**(Question) How are Q and R parameterized?**
We apologize for not making this clear in the paper. The transition noise $Q$ is assumed to be diagonal; it is learned and is independent of the state, following [6]. The observation noise $R$ is also diagonal and is the output of the observation encoder, which is used in the Kalman Update / Bayesian Conditioning stage. This will be clarified in the final version.
**(Question) What happens when $H$ is manipulated?**
Answered above.
**(Question) Can tasks from different environments ... be non-trivial?**
The challenge here would be dealing with varying dimensional observation and action space as the agent/environment changes. If this challenge can be addressed, MTS3 should scale to different environments in our opinion by learning environment-specific temporal abstractions for adaptation.
**(Question) What does the temporal encoding ...?**
It is the normalized value of the absolute number of steps taken in a particular window (the absolute number of steps in a window is always $1 \leq t \leq H$). We didn't use any encoding.
**(Question) Would it be possible to clarify what $m_t$ action entails; is this the derivative or something else?**
Could the reviewer clarify which notation in the manuscript they are referring to? We would be happy to clarify this.
**(Weakness / Question): How well large/massive shift in dynamics would be encoded? For example some large spike.**
We understand that since we have Gaussian assumptions, it's natural to pose this question about discrete changes. We can't learn/predict changes in dynamics caused by external rare events that are not predictable, like a spike (up and down suddenly) due to a human pushing a robot for a brief duration unexpectedly. However, we do think it can handle dynamics changes in the form of step functions (e.g., the load of the robot changes at set durations etc) as long as these changes have a pattern. The Franka Kitchen environment and the Panda real robot encounter such scenarios in our experiments.
We hope to have addressed most of the questions that the reviewer pointed out here and in global comments. We thank the reviewer again for their insights and remain hopeful that they would consider increasing the score.
Please refer to the global comment section for references.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, I really appreciate it. I will maintain my positive opinion of the paper.
---
Reply to Comment 1.1.1:
Comment: Many thanks for your reply and positive feedback!
---
Rebuttal 2:
Comment: Dear Reviewer,
The author has posted their rebuttal, but you have not yet posted your response. Please post your thoughts after reading the rebuttal and other reviews as soon as possible. All reviewers are requested to post this after-rebuttal-response. | Summary: The authors introduce the Multi Time Scale State Space (MTS3) model in this work. The model uses closed-form equations derived using exact inference, spread across two time-scales, to produce long-horizon predictions and uncertainty estimates. They demonstrate the superiority/competitiveness of their inference approach across a number of offline datasets, both in terms of long-term deterministic predictions, and long-term uncertainty quantification.
Strengths: - To the best of my knowledge, this closed-form multi time-scale inference approach using SSMs is novel, and has not been explored in previous works. The produced inference model is a principled (but non-trivial) integration of pre-existing components, and the results demonstrate the promise of such an approach
- Generally, the work is of high quality. The writing is clear, and the paper is well-organized. The related work section is brief, but appears to adequately address prior work relevant to the aims of this paper.
Weaknesses: - In the Figure 3 ablation plots, MTS3 should remain as Red in (c) to improve clarity. It would also be much clearer if the ablations were displayed in their own figure. It also does not make much sense, beyond organization of the plots onto separate lines, why Figure 3a and 3b are separated, as they are displaying the same findings across different environments. The a/b grouping contains no semantic difference. I understand the need for conserving space, but the way the figures are grouped and color-coded, it is not clear at a first glance what the relationship between a, b, and c are.
- Spacing between Table 1 and the caption is too tight.
- Figure 1 should be raised so that the caption does not overlap with 3.1 (try to align with line 79 paragraph).
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - How were the hyperparameters for MTS3 tuned? How were the baselines tuned?
- Transformers are achieving impressive results in control as of late. Where might they be integrated into this work in the future, to improve scalability, generalization, etc.?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: - The modeling assumptions constitute most of the limitations, but these are also what enable the convenient closed-form updates.
- The authors note that their model is limited to two levels of temporal abstraction, and that for certain tasks (e.g. Maze2D) more hierarchies may help. The method allows for the addition of more complex abstractions such as Feudal hierarchies.
- The authors are restricting their application to action conditional long-horizon future predictions. Although they note that future work can use these predictions for hierarchical control, they leave this for future work—it would have been nice to demonstrate this possibility with real control results.
- Their method relies on reconstruction loss, which may have limited direct application to image-based domains. However, as the authors note, non-reconstruction based losses can be integrated. Again, it would have been nice to see this integration of different losses directly.
- Overall, while I believe the authors have extensively noted the major limitations, especially for listed limitations (ii) and (iii), it would have been nice to see results in this paper to demonstrate that these limitations can indeed be relieved with simple substitution/introduction of new components.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for looking at our submission in a positive light. We would like to address a few questions raised by the reviewer below:
**Question: How were the hyperparameters for MTS3 tuned? How were the baselines tuned?**
All hyperparameters for MTS3 and baselines including Transformers were tuned using grid search. For all recurrent models (MTS3, HiP-RSSM, RKN, LSTM, and GRU), we use a similar encoder-decoder architecture across datasets to ensure a fair comparison but allow the latent state dimensions to vary based on hyperparameter search. Small variations from these encoder-decoder architecture hyperparameters can still lead to similar prediction performance as reported in the paper.
**Question: Transformers are achieving impressive results in control as of late. Where might they be integrated into this work in the future, to improve scalability, generalization, etc.?**
We do not see a way as of now to integrate transformers into the "exact inference" scheme for MTS3. However, a variational formulation of MTS3 (on the same graphical model that we propose) would allow for integrating transformers, especially to replace the set encoders used for Bayesian aggregation with transformer-style (self-attention-based) task encoders. Here the task posterior would be parameterized with transformers operating on sets of observations. Deriving an ELBO for variational approximations on the MTS3 graphical model is straightforward and would result in mathematically convenient decomposable objectives and can be a future research direction.
**Weakness: Confusing figure placement and formatting issues**
Thank you for the suggestions. These would be duly addressed in the final version.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for adequately addressing my concerns. I maintain my prior opinion of the paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply and positive outlook!! | Rebuttal 1:
Rebuttal: We thank all the reviewers for their valuable suggestions / insightful questions and comments. We would like to post answers here to some common questions/weaknesses raised by multiple reviewers.
**1. Questions on whether the strong experimental results are out of learning at multiple time scales or some training strategies like imputation**
We hope the extensive ablations that we report in the **rebuttal pdf**, with different values of the discretization step (which changes the magnitude of the time scale), show that it is in fact the learning of temporally abstract hierarchical world models that results in improved performance. We also compare with a "Flat" single-time-scale MTS3. Note that all ablations/baselines were trained with the best possible imputation schemes to ensure a fair comparison. Moreover, we provide intuitive prediction plots that explain to some degree how MTS3 behaves when the discretization step $H$ is changed.
**2. A general definition and implementation sketch for MTS3 with arbitrary hierarchies. How to choose H.**
**Definition:** An $N$-level MTS3 is defined as a family of $N$ state space models, $\{S_0, S_1, \ldots, S_{N-1}\}$. Each of the state space models $S_i$ is given by $S_i = (Z_i, A_i, O_i, f_i, h_i, H_i \Delta t, L_i)$, where $Z_i$ is the state space, $A_i$ the action space, and $O_i$ the observation space of the SSM. The parameter $H_i \Delta t$ denotes the discretization time-step, and $f_i$ and $h_i$ the dynamics and observation models, respectively. Here, $l_i \in L_i$ is a task descriptor that parametrizes the dynamics model of the SSM and is held constant for a local window of $H_{i+1}$ steps. $l_i$ is a function of the latent state of the SSM one level above it, i.e., $S_{i+1}$. The boundary cases can be defined as follows: for $i=0$, $H_0 = 1$. Similarly, for $i=N-1$, the latent task descriptor $L_i$ is an empty set. For all $i$, $H_i < H_{i+1}$.
**Choosing Discretization Step:** Though we recommend searching for $H_i$ as a hyperparameter, as a general rule of thumb, it can be chosen as $H_i = (\sqrt[N]{T})^i$, where $T$ is the maximum prediction horizon required / episode length. This ensures that very long recurrences are divided between smaller equal-length task-reconfigurable local SSM windows (of length $\sqrt[N]{T}$) spread across several hierarchies.
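This rule of thumb can be sketched in a few lines (the helper name is ours, invented for illustration):

```python
# Hypothetical helper implementing the rule of thumb H_i = (T^(1/N))^i,
# which recovers the boundary case H_0 = 1 at the lowest level.
def discretization_steps(T, N):
    """Discretization step for each of the N levels of an MTS3."""
    base = T ** (1.0 / N)
    return [round(base ** i) for i in range(N)]

# A 900-step horizon with N = 2 levels gives windows of length 30:
print(discretization_steps(900, 2))   # -> [1, 30]
```

For instance, with a 1000-step horizon and three levels the windows would be 1, 10, and 100 steps long, so each level's recurrence spans only the $N$-th root of the full horizon.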
**Computation and Implementation:** From a computational standpoint, the higher-level temporal abstractions are inferred by aggregating $H_i$ observations/actions at each level $i$. Since we derive this aggregation as a permutation-invariant set operation, it can be efficiently parallelized. The derived aggregation rules can be thought of as a simple "probabilistic attention," where the attention weights are given by the learned variances in the set encoder; this has linear computational complexity $O(H_i)$, whereas the analogous set operation in transformers (self-attention) has complexity $O(H_i^2)$.
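This kind of precision-weighted set aggregation can be sketched as follows (a generic Gaussian Bayesian aggregation in the spirit of [9]; the function name and values are our own invention, not the authors' code). Each observation's attention weight is its inverse variance, and the cost is linear in the window length:

```python
import numpy as np

def bayesian_aggregate(mu, sigma2, prior_mu=0.0, prior_sigma2=1.0):
    """Fuse H per-observation estimates (means mu, variances sigma2) into a
    Gaussian posterior by precision weighting; O(H) per latent dimension."""
    prec = 1.0 / prior_sigma2 + np.sum(1.0 / sigma2, axis=0)
    post_var = 1.0 / prec
    post_mean = post_var * (prior_mu / prior_sigma2 + np.sum(mu / sigma2, axis=0))
    return post_mean, post_var

# A confident observation (variance 0.25) gets 4x the weight of a noisy
# one (variance 1.0), so the aggregate is pulled toward it:
mu = np.array([[3.0], [-1.0]])
sigma2 = np.array([[0.25], [1.0]])
mean, var = bayesian_aggregate(mu, sigma2)   # mean ~ 11/6, var = 1/6
```

Because the fusion is a sum over the set, it is permutation-invariant and trivially parallelizable, which is the property the rebuttal relies on.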
**3. The advantage over transformers / Can transformers be integrated into MTS3 formalism?**
The goal of this paper is to come up with a formal probabilistic framework to learn world models leveraging hierarchical temporal abstractions. This also lays a foundation for designing more principled hierarchical planning methods, that can leverage control as an inference framework. Though we outperform Transformers on several benchmarks, we think Transformers (or any stateful RNN) that operate at multiple timescales taking inspiration from this formalism can be a promising alternative research direction as pointed out in line 260 in the main paper. But coming up with such an architecture in a non-ad-hoc way is not trivial in our opinion.
There are also debates within the community as to whether LLMs/Transformers capture causal relationships [1] and whether autoregressive models like GPTs can be effective for planning and reasoning [2].
We think our model combines the benefits of both worlds: set-based processing of temporal information, as in Transformers, via principled aggregation schemes (a form of probabilistic attention), while maintaining the sequential dependency of RNNs, which enforces causality by design. It also uses a fraction of the parameters of the Transformer baselines and is more computationally efficient during inference, as discussed in the previous section.
**Additional Points**
We are attaching a [zip file](https://drive.google.com/file/d/103fbLz1ahrfbSCGN0Ko58A7v76EVLPlu/) containing our unofficial code. We once again thank all the reviewers for their insightful comments and questions. We hope we were able to answer most of the questions raised by the reviewers. Please let us know if you need more clarification.
**Reference**
[1] https://www.cclear.cc/2023/CLeaR23_roundtable_discussion.pdf
[2] LeCun, Yann. "A path towards autonomous machine intelligence, version 0.9.2, 2022-06-27." Open Review 62 (2022).
[3] Nagabandi et al. "Learning to adapt in dynamic, real-world environments through meta-reinforcement learning." ICLR (2018).
[4] Shaj et al. "Hidden parameter recurrent state space models for changing dynamics scenarios." ICLR (2022).
[5] Dayan and Hinton. "Feudal reinforcement learning." NIPS (1992).
[6] Becker et al. "Recurrent Kalman networks: Factorized inference in high-dimensional deep feature spaces." ICML, 2019.
[7] Shaj et al. "Action-conditional recurrent Kalman networks for forward and inverse dynamics learning." CoRL, 2021.
[8] Wilson and Izmailov. "Bayesian deep learning and a probabilistic perspective of generalization." NeurIPS (2020).
[9] Volpp, Michael, et al. "Bayesian context aggregation for neural processes." ICLR (2020).
[10] Singh, Gautam, et al. "Sequential neural processes." NeurIPS (2019).
[11] Lusch et al. "Deep learning for universal linear embeddings of nonlinear dynamics." Nature Communications (2018): 4950.
[12] Weissenbacher et al. "Koopman q-learning: Offline reinforcement learning via symmetries of dynamics." ICML, 2022.
Pdf: /pdf/f3a3da2e7a480fd316c4ce5a451fd2a83521c60f.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Sample based Explanations via Generalized Representers | Accept (poster) | Summary: The paper proposes a unifying framework for sample-based explanation methods via generalised representers. The framework proposes to approximate a general nonlinear predictive function using a surrogate from the Reproducing Kernel Hilbert Space (RKHS) (Surrogate function $f(x)=\sum_{i=1}^{n}\phi(x_i,x)=\sum_{i=1}^{n}\alpha_i k(x_i,x)$). The underlying explanation function $\phi$ is both necessary and sufficient to satisfy a given set of axioms (aka efficiency, continuity, self-explanation, symmetric zero, symmetric cycle, irreducibility axioms) and can be decomposed in two parts, namely a global component, viz. the alpha coefficients (constant over the whole input space), and a local one, viz. the kernel similarity between the evaluating point and each training instance (which clearly depends on the input). Existing sample-based explanation frameworks, including TracInCP, influence functions and representer points, are shown to be particular instantiations of the proposed framework depending on the choice of the alpha coefficients and the kernel function. Additionally, the framework suggests a new sample-based explanation strategy to reduce the computational storage burden of TracInCP. Experiments are conducted over a CNN model on CIFAR10 and MNIST, thus empirically highlighting the benefits of different choices of alpha coefficients and kernel functions.
Strengths:
1. The idea of unifying existing sample-based explanation approaches is elegant, original and novel ( **Originality** )
2. Overall, the paper is clear and self-contained ( **Clarity** ). Perhaps, the presentation can be improved by taking into account some of the suggestions highlighted in the Questions section.
Weaknesses: 1. Code is not available. It would be good to release the code for replicating the analysis ( **Reproducibility** ).
2. While the theory unifying different sample-based explanation approaches is nice, the observations that can be drawn from it are rather limited to the analysis on the choice of different alpha/kernel functions ( **Significance** ).
3. The experimental analysis uses rather small and simple neural networks and represents a simple proof of concept (**Quality**). Since the authors compare against TracInCP, they should consider using the same experimental settings of the main paper (namely using ResNet architecture).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Overall, I liked the overall formalisation of sample-based explanation techniques. Can you please elaborate more on the following two aspects?
1. Additional considerations/insights that can be drawn from the analysis.
2. Scope of validity of experimental results. Are the observations valid for instance on other and larger network architectures than simple CNNs?
Please, find below a list of possible points to improve the presentation:
1. Is Method 2 in Section 4.2 necessary for the presentation?
2. Line 206 should $\nabla_{\theta}$ be replaced with $\nabla_{f_\theta}$?
3. Line 222 replace the correct reference and Line 224 remove “show”
4. Would it better to summarise sample-based explanation strategies in the form of a table, thus highlight the different choices of alpha and kernel functions and the difference with the proposed strategy?
5. Can you please highlight more the difference between TracInCP and your strategy?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No additional suggestions to improve the paper
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the review, and we truly value the time you dedicated to reading our paper. Our point-to-point responses to your comments are given below.
**Code release**: We are working on reorganizing the code, and plan to release it when the paper is published.
**Additional insights from analysis**: The analysis gives us insight into how these axioms guarantee that explanations take on a unique form. The symmetric cycle axiom ensures that the explanation has the form consisting of a global importance and a kernel similarity term.
The self-explanation axiom and the symmetric zero axiom address scenarios where explanations may become zero. Specifically, the self-explanation axiom handles the case where a training sample has no impact on the model and ensures that the global importance should be zero in this case. On the other hand, the symmetric zero axiom emphasizes that the similarity measure must remain symmetric even when one of the explanations is zero.
Lastly, the irreducibility axiom ensures that the kernel function is a Mercer kernel by requiring it to be positive semidefinite.
**Scope of Experiments**: Our experimental setting requires retraining the models to be explained over 10,000 times, so we decided to perform experiments on smaller models, on both image and language data (see appendix), due to limited computational resources.
For larger models, we expect similar trends, which can be partially supported in other studies. Particularly, Trak [38] employs influence kernels and empirically demonstrates that it outperforms existing methods by a large margin for large models like ResNet-50 and Transformer. The observation aligns with our finding when comparing different kernels. Moreover, experiments in Yeh et al. [53] show that information in early layers is more important when computing data influences for transformers. It is also observed in our language data experiments on smaller models since the NTK-final outperforms last layer embeddings in Table 2 in the appendix.
**Comparison to TracInCP**: Our proposed tracking representers share a similar intuition with TracInCP, as both track gradient-descent trajectories. However, a crucial difference is that TracInCP employs a changing NTK kernel as it accesses different checkpoints in the training process. While this may lead to a more accurate approximation, TracInCP requires storing training checkpoints and performing inference on each of them, resulting in an increased computational and storage cost proportional to the number of checkpoints.
In contrast, the tracking representer only requires the NTK from the final model. It accumulates derivative information using the global importance, as shown in Definition 11, which significantly improves efficiency. Remarkably, our experiments demonstrate that this approach achieves comparable performance to TracInCP while being more computationally efficient.
**Suggestions for improving presentation**: Thank you for the suggestions. We have corrected the typos in our draft. A table listing the choices of global importance and kernel functions of existing sample-based approaches and the proposed algorithms has also been added. Also, the target-derivative approach is important to the paper since it provides the global importance used by existing approaches like the influence function and TracInCP (when the learning rate is constant).
---
Rebuttal Comment 1.1:
Title: Thanks for Answers; Additional Clarifications
Comment: Thank you for the answers, which address most of my points/questions. I went also through the other reviews and the remaining concerns can be summarized as follows:
1. Scope of experiments and experimental methodology
2. Motivation about axioms and similarities/differences compared to Data Shapley (as suggested by reviewer SFDZ)
3. Valorizing novelty of proposed unified framework
Regarding point 1, experiments on language data are a nice addition and therefore appreciated. However, the major concern about the experimental methodology and comparison with TracInCP is not well addressed yet. While I understand and empathize with the authors regarding the lack of computational resources required by larger-scale experiments, I think that the analysis I'm hoping to see is still feasible and would strengthen the validity and scope of the results. Indeed, the comparison with TracInCP (for instance using a ResNet architecture) can still be conducted by choosing an off-the-shelf pre-trained model without incurring large computation. Indeed, in the experiments, you almost always leverage the last checkpoint, and the analysis is mostly focussed on the computation of the coefficients with a fixed feature map. Can you please elaborate more on this aspect?
Regarding point 2, I agree and therefore support author's answers. Indeed, different axioms lead to different forms of explanation functions. The axioms provided in this paper aim at unifying previous sample-based explanations and establish a connection with kernel functions. However, I'm curious to see what is the opinion of reviewer SFDZ.
Regarding point 3, it is still unclear to me what are the novel insights that can be drawn from the unifying framework and what are possible directions for future research. In the hope of valorizing your nice analysis, could you please comment on this aspect? Perhaps the following table might seed some thought on addressing this question...
| Choice of kernel / Choice of coefficients | Method 1 (solving regression) | Method 2 (no regression, replacement of surrogate) | Method 3 (solving regression and tracking gradients) |
|---|---|---|---|
| Kernel 1 (neural embeddings Eq. 9) | Representer Point Selection | ? | ? |
| Kernel 2 (neural tangent kernel Eq. 11) | ? | ? | TracInCP |
| Kernel 3 (influence functions Eq. 13) | ? | Influence functions | ? |
---
Reply to Comment 1.1.1:
Title: Thanks for bringing up the overlooked issues
Comment: We appreciate you bringing up the overlooked issues.
**Scope of Experiments**: For comparison with TracInCP [6] on larger models, we follow the experimental setting in [6]. We train a ResNet-50 on the CIFAR-10 dataset and randomly flip 40% of labels (uniformly to the other 9 classes). We compute the self-influence (a sample's influence on itself) of each training sample using these approaches. We expect the mislabeled data to have higher self-influence, and a more accurate estimation should identify more of the influential mislabeled training data.
For tracking representers, we use $|\alpha_{ij}|$ to measure the self-influence of the $i^{th}$ sample with label $j$ [6,7]. For TracInCP, we leverage 8 checkpoints from the beginning to the end of training. We use retraining accuracy as our evaluation metric. The experimental results are as below.
| Fraction of training samples checked | 0.2 | 0.4 | 0.6 | 0.8 | 1.0 |
| ----------- | ----------- |----------- |----------- |----------- |----------- |
| TracInCP | 63.24 | 67.51 | 70.34 | 72.29 | 75.32|
| Tracking rep. | 68.15 | 72.05 | 74.40 | 75.84 | 75.78|
We can see from the above results that tracking representers clearly outperform the TracInCP approach. We think the main advantage of tracking representers is that they track the magnitude of training gradients instead of using checkpoints as an estimation, as TracInCP does. This underscores a limitation of employing dynamic kernels during training, as it necessitates the storage of checkpoints for computing the various kernels. In contrast, the kernel is fixed in the tracking representers. This allows us to accumulate training-gradient information in the vector of global importances during training and eliminates the need to store large checkpoints.
Due to limited time and computational resources, we could only compare TracInCP with tracking representers; we will include more results for different generalized representers in the next version of the paper.
**Possible directions for future research**: One of our contributions is to offer a novel perspective on existing sample-based explanations. We propose that these explanations can be seen as approximations of a weighted combination of kernel machines that can be solved via an RKHS regression problem. This perspective allows us to have a more general framework.
As a result, a potentially valuable avenue for future research could be the selection of appropriate generalized representers tailored to specific downstream tasks. To illustrate, for large-scale models the computation of the Neural Tangent Kernel (NTK) might prove computationally intensive; here, a randomized projection of the NTK might suffice, given that random projection preserves pairwise distances with high probability.
The remaining six entries on the table are all novel generalized representers. It would be interesting to design different generalized representers in different scenarios. | Summary: In this study, the authors conducted an axiomatic analysis of a measure that quantifies the influence of a given training data on predictions.
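To make this RKHS-regression view concrete, here is a minimal self-contained sketch (our own toy example with an RBF kernel and synthetic data, not the paper's code): fit a kernel ridge surrogate $f(x)=\sum_i \alpha_i k(x_i,x)$, then read off each training sample's contribution $\alpha_i k(x_i, x_{\text{test}})$ to a test prediction. The contributions sum exactly to the surrogate's prediction, illustrating the efficiency axiom.

```python
import numpy as np

# Toy generalized-representer decomposition on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))                 # 20 training samples
y = X[:, 0] - 2.0 * X[:, 1]                  # synthetic targets

def rbf(A, B, gamma=0.5):
    """RBF (Gaussian) kernel matrix between row-sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

lam = 1e-3
K = rbf(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)   # global importances

x_test = rng.normal(size=(1, 3))
contrib = alpha * rbf(X, x_test)[:, 0]       # phi(x_i, x_test) = alpha_i k(x_i, x_test)
pred = (rbf(x_test, X) @ alpha)[0]           # surrogate prediction f(x_test)

# Efficiency: sum of per-sample contributions equals the surrogate prediction.
```

Each method in the table then corresponds to a particular choice of how `alpha` is obtained and which kernel replaces `rbf` (e.g. last-layer embeddings, the NTK, or an influence kernel).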
Under several axioms, the authors demonstrated that an effective measure of influence is limited to the form of a suitable coefficient multiplied by a continuous and positive definite kernel function.
Based on this finding, the authors showed that many existing influence metrics can actually be expressed in the form of a suitable coefficient multiplied by a kernel function.
Furthermore, the authors proposed a new measure by combining Representer Point Selection and Neural Tangent Kernel.
Strengths: The strength of this study lies in the axiomatic analysis of the measure of data influence.
Under Continuity Axiom, Self-Explanation Axiom, Symmetric Zero Axiom, Symmetric Cycle Axiom, and Irreducibility Axiom, the authors demonstrated that an effective measure of influence is limited to the form of a suitable coefficient multiplied by a continuous and positive definite kernel function.
Furthermore, based on this finding, the authors showed that many existing metrics for measuring influence can indeed be expressed in the form of a suitable coefficient multiplied by a kernel function.
The reorganization of these existing influence metrics from an axiomatic perspective represents a novel and significant contribution of this study.
Weaknesses: An essential weakness of this study is the insufficient discussion regarding the validity of various axioms.
While Continuity Axiom appears to naturally require the continuity of the measure, the validity of the other axioms, Self-Explanation Axiom, Symmetric Zero Axiom, Symmetric Cycle Axiom, and Irreducibility Axiom, is not necessarily evident from the current discussions in the paper.
In fact, Data Shapley [8] employs different axioms.
Since the choice of axioms determines the appropriate measure, the discussion of the validity of these axioms becomes crucial in the axiomatic analysis.
While the authors provide some intuitive explanations, they seem insufficient as a discussion on the validity of these axioms.
For example, what are the similarities and differences between the axioms employed in Data Shapley [8] and the axioms considered in this study?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * Please discuss the validity of the axioms introduced in the paper. When they are appropriate and when they may be not?
* What are the similarities and differences between the axioms employed in Data Shapley [8] and the axioms considered in this study?
---
I have read the authors' rebuttal.
The difference of the current study and Data Shapley [8] is partly solved.
I strongly believe it should be discussed in detail in the paper.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors mentioned some possible future directions that are not addressed in the current study.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review. We sincerely appreciate your time in reading the paper and we are grateful for your feedback! Our responses are given below.
**Validity of axioms**: The axioms of the generalized representers encompass both practical and mathematical implications for what an explanation should look like in ideal scenarios. Some axioms focus on the practical aspects of explanations when certain conditions are met, while others maintain important mathematical relationships.
For instance, the self-explanation and symmetric-zero axioms address scenarios where a sample has no impact on another sample. The self-explanation axiom emphasizes that when a sample has no influence, not even on itself, it is likely to have minimal impact on the model and should consequently have little or no influence on other samples as well. This highlights the practical aspect of explanations, indicating that non-influential samples should not significantly affect model outputs.
The symmetric-zero axiom underscores the bidirectional nature of “orthogonality”. It emphasizes that if a sample has no impact on another sample, this lack of correlation is mutual and implies that they are orthogonal. This axiom becomes particularly reasonable in cases where ML models treat all training samples equally. In such scenarios, the symmetric-zero axiom suggests that “orthogonal” features have no impact on each other.
On the other hand, the continuity and irreducibility axioms primarily serve a function-analytic purpose by providing necessary and sufficient conditions for a kernel to be a Mercer kernel, which requires that the kernel function be continuous and positive semi-definite. We note that without these two axioms, the theorem still holds in both directions, but the kernel function only needs to be symmetric, which is the minimal condition for being a kernel function. Since Mercer kernels have proven to be practically useful and successful in ML (including for algorithmic reasons that explicitly leverage these properties), we decided to preserve the two axioms to make the theorem more grounded in practical usage.
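To make the Mercer condition concrete, here is a toy sketch (our own illustration, not from the paper; the RBF kernel and the random data are assumptions) verifying that a Mercer kernel's Gram matrix is symmetric with non-negative eigenvalues:

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2); the RBF kernel
    # is continuous and positive semi-definite, hence a Mercer kernel.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))
K = rbf_kernel(X)

assert np.allclose(K, K.T)                  # symmetry
assert np.linalg.eigvalsh(K).min() > -1e-8  # PSD up to numerical error
```

Dropping the continuity and irreducibility axioms would only guarantee the first property (symmetry), not positive semi-definiteness.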
**Comparison to axioms of Data Shapley**: Data Shapley and generalized representers have distinct purposes and interpretations. Data Shapley is used to quantify the importance of individual training samples. In contrast, generalized representers assess the significance of training samples with respect to a specific test sample's prediction. Consequently, the axiomatic properties of these two approaches differ.
For instance, both Data Shapley and generalized representers adhere to a similar axiom, the self-explanation axiom or the dummy axiom, which dictates that explanations should be zero if a training sample has no impact on the model. However, they require different theoretical treatments due to the additional focus in generalized representers on explaining a model prediction for a particular test sample (we note that this is also the setting for popular sample explanations such as influence functions and TracIn). The additional facet of a test input introduces an extra degree of freedom, requiring another axiom to ensure uniqueness of the generalized representers. To this end, the symmetric-zero axiom further expands the notion of the dummy axiom by considering scenarios where one sample has no impact on another sample's prediction. Thus, to summarize, the distinction between Data Shapley and generalized representers arises from the different contexts and purposes of these two explanation techniques. We thank the reviewer for this important question, and will be sure to add this discussion to the final version of the paper.
---
Rebuttal Comment 1.1:
Title: Reply
Comment: I would like to thank the authors for adding detailed discussions.
The relationship between Data Shapley and Generalized Representer will be an interesting and important topic.
In the response, "additional focus in generalized representers of explaining a model prediction on a particular test sample" is the key point making these two studies different.
However, if I understand correctly, we can use Data Shapley for the same purpose by choosing the performance score function $V(\theta)$ as the predicted output on a particular test sample.
If this is the case, how do the axioms of Data Shapley and the axioms of Generalized Representers differ in this particular setting?
---
Reply to Comment 1.1.1:
Title: Thanks for your reply.
Comment: Thanks for your reply. We understand that Data Shapley can measure training influence for a test sample when the value function $V$ takes the loss of the test sample. However, this similarity is due to the particular choice of the value function. For theoretical development, the settings are different, since our generalized representers take two inputs while Data Shapley takes only one. Also, the origins of the two axiomatic frameworks differ: Data Shapley originates from the Shapley value, while ours comes from kernel theory. These differences lead to different theoretical formulations. We also note that, practically, the value function $V$ cannot take the loss of a particular test sample, since test samples are generally not given during training.
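To make the two-input structure concrete, here is a minimal sketch (our own illustration; the RBF kernel and the values of the global importances $\alpha$ are hypothetical) of the generalized representer score $\alpha_i K(x_i, x)$:

```python
import numpy as np

def representer_scores(X_train, alpha, x_test, gamma=1.0):
    # Generalized representer: score_i = alpha_i * K(x_i, x_test), i.e.,
    # a global importance alpha_i times a local kernel similarity.
    d2 = np.sum((X_train - x_test) ** 2, axis=1)
    return alpha * np.exp(-gamma * d2)  # RBF kernel: an illustrative choice

X_train = np.array([[0.0, 0.0], [1.0, 1.0], [3.0, 3.0]])
alpha = np.array([0.5, -1.0, 2.0])  # hypothetical global importances
x_test = np.array([0.0, 0.0])

scores = representer_scores(X_train, alpha, x_test)
# scores[0] equals alpha[0], since K(x_0, x_test) = 1 for the identical point
```

Data Shapley, by contrast, assigns each training sample a single number independent of $x$, unless the test point is folded into the value function $V$.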
As pointed out in the rebuttal, our axiomatic framework does share some similarities with Data Shapley. For example, the efficiency axioms are the same. Also, the dummy axiom corresponds to the symmetric-zero and self-explanation axioms.
The biggest difference is that the symmetry axiom is not included in our axiomatic framework, as our framework allows the same training sample to have different importance. We think this is reasonable, since the same input features may have different labels due to label errors. The generalized representers then capture this information in the global importance $\alpha$ and allow the input-feature similarities to be captured by the kernel product. This is why the symmetric-cycle axiom has a different formulation than the symmetry axiom of the Shapley value. | Summary: This paper studies a new framework for generating sample-based explanations for black-box machine learning algorithms. To explain a black-box model, the basic idea of sample-based explanations is to quantify how each training sample influences the prediction of a given test sample. The main contribution of this work is to give a natural set of axioms for defining the explanation functional, and to show the equivalence between explanation functions satisfying these axioms and Mercer kernels. Under this framework, the paper studies how to define the importance function based on given kernel functions and discusses some popular choices of kernel functions in the context of deep learning.
Strengths: This paper studies a very interesting and important question within the realm of interpretable (explainable) machine learning, making a very neat connection between a natural set of axioms for the explanation functional and kernel functions. The study is substantial and showcases promising results from numerical studies.
Weaknesses: The authors may benefit from expanding their discussion on the selection of kernels. For instance, the Inf-final kernel outperforms all other kernels in Table 1 but lags significantly in Table 2, according to the supplementary material. Given this inconsistency, it seems slightly misleading to simply promote the influential kernel in the main body of the paper without a more thorough discussion.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: The paper would also be strengthened by additional discourse on the axioms, which are a key element of the study. How do existing sample-based representers fare against these axioms? Furthermore, is the axiom set minimal in its essence? For example, is it possible that the continuity axiom may not hold in some practical situation? The kernel function doesn't have to be continuous without the continuity axiom.
Although the authors present the tracking representer as "a more scalable variant" due to its reduced computational burden, it consistently outperforms the other two approaches. Could this be partially attributed to early stopping? In addition, the target derivative is proposed as an approximation to avoid solving the Reproducing Kernel Hilbert Space (RKHS) regression, but it also consistently outperforms the original surrogate derivative. It would be beneficial to have further commentary on these points.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your encouraging words and constructive comments. Your questions are answered below.
**Seemingly Inconsistent experimental results of Influence function kernel**: When dealing with language data, we calculate the influence function kernel using the last-layer embeddings [5,53]. This choice is made because the word embedding layer contains a substantial number of parameters, making it computationally impractical to compute the exact influence function. However, it has been suggested in [53] that the embedding layer is essential for computing sample influences. We think that the inf-final kernel may not perform as well as the NTK-final kernel due to the absence of the embedding layer's information.
We plan to merge the two experimental results into the main text, and our conclusion would be that the inf-final kernel performs best whenever all parameters are considered. Otherwise, NTK-final would be the best choice.
**Additional discourse on axioms**: Most gradient-based sample-based explanations [24], including representer point selections, influence functions, and TracIn (when the corresponding kernel is fixed), satisfy the proposed axioms, since they can be represented as generalized representers. On the other hand, retraining-based sample-based explanations [8, 25, 27] compute model prediction changes after removing one or more training samples and retraining the model. These approaches cannot be computed using one single model and cannot be represented as generalized representers.
The current set of axioms is minimal, since the theorem demonstrates that the axioms imply the explanations are generalized representers, and the converse also holds. Additionally, the continuity axiom is crucial to ensure that the kernel qualifies as a Mercer kernel. As most kernels used in the current machine learning literature are Mercer kernels, continuity becomes an essential aspect of our analysis. However, when dealing with explanations for non-continuous models, it is plausible that the corresponding kernel may not be continuous. Extending our axiomatic framework to accommodate such scenarios is an interesting subject for future work.
**Tracking representer**: As indicated by TracIn [6], accessing early checkpoints can be advantageous in identifying influential samples, given that neural networks tend to memorize and overfit to training data. Therefore, the improved performance of tracking representers on the deletion curve metric may be partially attributed to their capacity to retain valuable loss information throughout the training process.
Also, since the target derivative more accurately reflects the sensitivity of the model to the training loss, the target derivative may perform better on the deletion curve metric. | Summary: This paper studies and proposes a set of desirable axioms for sample-based explanations. It further demonstrates that the only solution satisfying the set of desirable axioms has the form $\alpha_i K(x_i,x)$ (i.e., the product of two components: a global sample importance, and a local sample importance that is a kernel similarity between a training sample and the test point), where $K(\cdot,\cdot)$ is a PSD kernel function. Moreover, many existing sample-based explanation methods, such as representer point selection [7], influence functions [5], and TracIn [6], can be viewed as specific instances of the broad class of generalized representers.
Strengths: The set of desirable axioms makes sense and the link between kernel theory and sample based explanations through set of desirable axioms is interesting.
The viewpoints to connect the sample based explanations through the set of desirable axioms and the existing sample-based explanation methods is interesting.
Weaknesses: Training and learning a kernel model to approximate a given black-box function is challenging and computationally expensive.
Method 2: approximation using the target function is not convincing to me, because it has a loose connection to kernel theory and the local sample importance is unclear in this method.
Method 3 seems to be the most practical, but it requires that the feature map $\Phi$ be fixed.
Regarding Kernel 1: Penultimate-layer Embeddings, it is still unclear to me how to store $\alpha$ and compute the score $\alpha_i K(x_i,x)$ if the feature map $\Phi_{\theta_1}$ is updated all the time and we have multiple kernels over the course of training.
For Kernels 2 and 3, the proposed approach serves as a tool to explain [5] and [6] rather than introducing a new approach.
The experiments are conducted only for binary classification with limited data, although the conclusion is interesting.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please address my questions in the weakness section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors did not address the limitations and potential negative societal impact of their work
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to read and review our paper! We are grateful for your feedback. Please see our responses below.
**Regarding whether we propose new approaches or explain existing approaches**: One of our key goals was to provide an axiomatic framework for a large class of sample-based explanations. Thus, the fact that two of the most practically successful sample-based explanations (influence functions and TracIn) can be viewed as direct or slightly extended instantiations of our framework is an important indication of the generality of the framework, as well as an indirect validation of the relevance of the corresponding set of axioms specifying the framework. Nonetheless, note that our framework is much more general than influence functions or TracIn, and allows for a suite of other approaches; for instance, with a domain-knowledge-specified kernel.
**Regarding computing global importance $\alpha$ with changing kernels**: Dealing with a changing kernel for general non-linear models is still an open problem; what people in the NTK field do now is to extract a kernel representation of the model at a particular checkpoint, i.e., an initialized model or a pretrained model, and use the corresponding kernel machine to approximate the original model [54,56]. In this work, we follow this approach [54,56] from the NTK literature and also compare the empirical performance of different choices of kernels and different model checkpoints in the experiment section.
Furthermore, from our experimental findings, it appears that employing multiple kernels might not lead to improvements in the deletion curve metric. For instance, in the case of TracInCP [6], which utilizes NTK from multiple checkpoints, we observed that this approach does not yield significant benefits yet substantially escalates computational and storage costs. Consequently, we advocate the use of a fixed kernel from the final model, as it suffices for sample-based explanations while avoiding unnecessary complexity and resource burden.
**Experiments**: We provide an additional experiment with a CNN on language data in the Appendix, and similar trends are observed. We plan to merge the two experiments and put them in the main text. We hope the experiment on text data may provide a more convincing conclusion.
**Limitation of our work**: We outline potential societal impact in the appendix and mention possible future directions in the conclusion section.
**Regarding computing global importance $\alpha$ and choices of kernels**: We discuss choices of kernel functions and the computation of global importance in Sections 4 and 5, respectively, and we believe that any kernel choice in Section 5 can be combined with any method in Section 4. Specifically, users may specify a kernel according to their domain knowledge related to models and applications. Next, for methods 1 and 3, users need to fix the kernel and solve the corresponding RKHS regression. For method 2, they only need to compute the derivative of training samples with respect to the loss.
---
Rebuttal Comment 1.1:
Title: Thanks for your rebuttal
Comment: Thanks for your rebuttal that clarifies my questions. I decide to keep my current score. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Rethinking Tokenizer and Decoder in Masked Graph Modeling for Molecules | Accept (poster) | Summary: In this work, the authors investigate the tokenizer and decoder in masked graph modeling (MGM), presenting a Simple GNN-based Tokenizer (SGT) along with an efficient decoder built by stacking GINE and GraphTrans (GTS). SGT uses non-trainable linear graph aggregation to obtain graph tokenizations. The work also introduces a variant of remask for the encoder and decoder of GTS. Experiments comparing against other SSL baselines show that the proposed method achieves superior performance on multiple molecular property benchmarks. Comprehensive ablation studies also demonstrate the effectiveness of SGT and GTS in molecular representation learning compared to other design choices in MGM.
Strengths: 1. The paper is well-written and easy to follow overall.
2. Systematic study of tokenization and decoder can be a valuable contribution to MGM for molecules. Since multiple works have applied masked modeling in molecular representation learning but few have extensively studied the design space of the MGM.
3. The paper includes comprehensive investigations of MGM design choices in section 4, which validates the effectiveness of the proposed SGT and decoder.
4. Technical details as well as complete results are reported in supplementary material, demonstrating the experimental soundness.
Weaknesses: From my perspective, the major weakness is the miss of quantum-mechanics-related benchmarks. QM-related properties play an important role in applications. Thus performance on QM benchmarks can further help evaluate the effectiveness of the proposed method.
Please find more details in the "Questions" below.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Major questions:
1. SGT applies only linear aggregation operations to graphs without nonlinear update functions. What will happen if nonlinearity is included on SGT?
2. In Finding 2, the authors write "it indicates that remask constrains the encoder’s efforts on graph reconstruction, allowing it to focus on MRL" which suggests an interesting mismatch between graph reconstruction and MRL. Can the authors discuss more about why the mismatch exists and how it provides insights into designing self-supervised learning framework for MRL.
3. Table 3b suggests that pretrained GNN tokenizer benefits from depth, however, the proposed SGT achieves the best performance with one single layer. What could be the cause of such observation?
4. The work benchmarks on multiple MoleculeNet and DTA tasks. There are still important quantum-mechanics-related benchmarks (e.g., QM9, MD17) that are not included. Though the proposed method built upon 2D graphs is unlikely to compete with SOTA equivariant GNNs that leverage 3D information. Benchmarking against other SSL baselines on QM benchmarks provides a more comprehensive evaluation.
5. In Table 1, why are Mole-BERT and GraphMAE included as baselines in GINE encoder but not in GTS encoder setting?
Minor questions:
1. In section 2.1, the authors mention using mean pooling to obtain representation of the subgraph. Some works also integrate max or summation pooling. Will that affect the performance in molecular property predictions?
2. From my perspective, the authors explain too many details regarding BRICS in section 2.2. Yet BRICS is not the focus of the work. The authors may consider truncating the paragraphs.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have included reasonable discussions regarding potential limitations in the supplementary material.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Q1.** The performance on the quantum mechanics benchmarks, e.g., QM9, MD17.
**Response:** Thanks. We have included the results on the QM7, QM8, and QM9 datasets in our updated Table 3. This experiment reuses the model checkpoints and settings in Table 5. We observe that SimSGT consistently outperforms representative baselines of GraphCL, GraphMAE, and Mole-BERT.
Note that our results on QM9 differ in scale from those of Uni-Mol [7]. This is because of a different setting: we use all 12 tasks in QM9, while Uni-Mol uses 3 tasks.
> **Q2.** SGT applies only linear aggregation operations to graphs without nonlinear update functions. What will happen if nonlinearity is included on SGT?
**Response:** Thanks for the question. If nonlinear update functions (both nonlinear activation functions and weights) are included, two potential scenarios arise:
* **GNN tokenizer is pretrained.** This is the same as the pretrained GNN-based tokenizer reported in Table 3 (b).
* **GNN tokenizer is not pretrained.** This will lead to failed MGM pretraining. To show this, let’s further consider two cases:
* **Stop-gradient is used.** The un-pretrained GNN tokenizer, having random weights, will likely generate low-quality tokens, leading to failed pretraining.
* **Stop-gradient is not used.** This will lead to representation collapse, as discussed in [1,2]. The tokenizer branch and autoencoder branch will output constant values to minimize the loss, but constant values are not meaningful representations.
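To illustrate the collapse scenario (a toy construction of our own, not an experiment from the paper): if neither branch's target is fixed or gradient-stopped, constant outputs trivially minimize the matching loss while carrying no information about the input:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))  # a batch of toy inputs

# Both the tokenizer branch g and the autoencoder branch f collapse to a
# constant; the matching loss ||f(x) - g(x)||^2 is then exactly zero.
f = lambda x: np.zeros(4)
g = lambda x: np.zeros(4)

loss = np.mean([np.sum((f(x) - g(x)) ** 2) for x in X])
assert loss == 0.0  # a perfect loss, yet the representations are useless
```

Stop-gradient breaks this shortcut by preventing the tokenizer branch from adapting its outputs toward the autoencoder's.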
> **Q3.** More discussions on why the mismatch between graph reconstruction and MRL exists and how it provides insights into designing SSL framework for MRL.
**Response:** The mismatch exists because the graph reconstruction objective demands that the graph autoencoder’s last layer output raw features (*e.g.*, node/edge features). However, these raw features are not optimal for graph prediction tasks, the primary goal of MRL.
To deal with the mismatch, previous works in CV [3,4] have noticed that the autoencoder’s encoder outputs are representations of high-level semantics when 1) it is paired with a sufficiently expressive decoder and 2) the masked elements are excluded from the encoder. These masked elements are only included in the decoder, to prevent the encoder from learning to reconstruct raw features.
To adapt these insights for MRL, we use a sufficiently expressive decoder in a similar manner. However, excluding masked nodes from the encoder is nontrivial, because it can easily corrupt the graph structure, leading to significant performance drops in our preliminary experiments. To avoid corrupting structures, we decided to: 1) retain the masked nodes in the encoder’s GNN layers, which explicitly use graph structures; and 2) only exclude masked nodes in the encoder’s Transformer layers, which do not use graph structures.
> **Q4.** Why pretrained GNN tokenizer benefits from depth, but SGT does not?
**Response:** Thanks for the question regarding the differing behavior between the pretrained GNN tokenizer and SGT. We present our explanations below:
**Pretrained GNN Tokenizers.** The performance of pretrained GNN-based tokenizers relies heavily on the pretraining process. As evidenced in Table 3(b), changing the pretraining method, such as from GraphMAE to GraphCL, leads to noticeable differences in performance. Additionally, since the pretraining methods we examined were designed for deep-layer GNNs, these tokenizers naturally perform best with increased layers. The pretrained weights seem to alleviate the over-smoothing issues that could otherwise arise with depth.
**SGTs.** SGTs are not affected by pretraining. Their performance depends solely on the graph operator used. We conjecture that the decrease in performance seen with added depth in SGT is due to the over-smoothing effect that is often found in deep GNNs. Nevertheless, it is essential to emphasize that despite facing the over-smoothing issue, SGT demonstrates better or comparable performance among all compared tokenizers.
Thanks for the insightful observation. We will include this discussion in the limitation section in our revised submission.
> **Q5.** Missing baselines of running Mole-BERT and GraphMAE with the GTS encoder.
**Response:** Thanks for the suggestion. We have now included Mole-BERT and GraphMAE results when using the GTS encoder in our newly uploaded PDF file. As Table 1 shows, SimSGT maintains better performance on average.
Our initial submission did not include these results due to time constraints for reproducing the baselines.
> **Q6.** The authors mention using mean pooling to obtain subgraph representations. Will using max/sum pooling affect the performance?
**Response:** Thank you for the suggestion. We use mean pooling to follow the method of obtaining graph representations in [5,6]. We agree that different pooling strategies may impact performance. Hence, we have added results for the MGSSL tokenizer using sum and max pooling in our updated Table 4. The results show that mean pooling yields the highest performance, affirming the soundness of our earlier experiments.
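For concreteness, the three readout choices compared above, applied to a toy node-embedding matrix (an illustrative sketch of our own, not code from the paper):

```python
import numpy as np

# Node embeddings of a 3-node subgraph, embedding dimension 2.
H = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

mean_pool = H.mean(axis=0)  # -> [3., 4.]
sum_pool = H.sum(axis=0)    # -> [9., 12.]
max_pool = H.max(axis=0)    # -> [5., 6.]
```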
> **Q7.** The authors explain too many details regarding BRICS in section 2.2.
**Response:** Thank you for your suggestion. We acknowledge that the explanation of BRICS may be excessive. We will move some details to the Appendix, allowing for a more focused narrative in the main paper.
**Reference:**
[1] Exploring Simple Siamese Representation Learning. In CVPR 2021.
[2] Understanding Self-Supervised Learning Dynamics without Contrastive Pairs. In ICML 2021.
[3] Masked autoencoders as spatiotemporal learners. In NeurIPS 2022.
[4] Masked Autoencoders Are Scalable Vision Learners. In CVPR 2022.
[5] Graph Contrastive Learning with Augmentations. In NeurIPS 2021.
[6] Strategies for Pre-training Graph Neural Networks. In ICLR 2020.
[7] Uni-Mol: A Universal 3D Molecular Representation Learning Framework. In ICLR 2023.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' efforts in answering my questions and additional experiments. I remain positive about the manuscript and its contributions.
---
Reply to Comment 1.1.1:
Title: Thanks for the feedback!
Comment: Thank you for your positive feedback on our manuscript. We're glad our efforts to address your questions were satisfactory. Should you have any further inquiries or need additional clarification, please don't hesitate to ask. We appreciate your support. | Summary: The paper examines the effectiveness of tokenizer and decoder in the self-supervised representation learning of molecular graph following masked auto-encoding framework. Specifically, the paper adopts GraphTrans architecture for its encoder and a smaller GraphTrans for its decoder. A simple GNN-based architecture serves as tokenizer in the framework. The proposed method is pre-trained on 2 millions molecules and is evaluated using 8 classification datasets. The ablation analysis is conducted for the proposed architecture.
Strengths: - The paper is well-written and easy to follow.
- The idea of improving tokenizer and decoder in self-supervised learning framework is well-motivated.
Weaknesses: - The proposed method utilizes a simple GNN-based architecture to learn the feature embeddings. However, these feature embeddings only serve as targets for masked auto-encoding. I think it is misleading to call it a tokenizer.
- The performance of GraphMAE [1] and Mole-BERT [2] in Table 5 is lower than one reported in the original paper.
[1] Graphmae: Self-supervised masked graph autoencoders. Hou et al. In KDD 2022.
[2] Mole-BERT: Rethinking pre-training graph neural networks for molecules. Xia et al. In ICLR 2023.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - My main concern is about the reported performance which may cause unfair comparison.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Q1.** The proposed method utilizes a simple GNN-based architecture to learn the feature embeddings. However, these feature embeddings only serve as targets for masked auto-encoding. I think it is misleading to call it a tokenizer.
**Response:** Thank you for your insightful comment on the module names used in our work. We concur that the term "tokenizer" traditionally refers to the module defining both input and target in NLP tasks. However, prior research in computer vision and molecular studies [1,2,3] also utilizes the term "tokenizer" for modules that solely specify the reconstruction targets. Hence, we chose to align our terminology with these works for consistency.
> **Q2.** The performance of GraphMAE and Mole-BERT in Table 5 is lower than one reported in the original paper.
**Response:** Thank you for bringing our attention to the performance metrics. Following your suggestions, we have updated the reported performance of GraphMAE and Mole-BERT in Table 1 of the newly uploaded pdf, based on the original scores in their respective papers. Despite these adjustments, SimSGT still outperforms all the baselines w.r.t. average performance.
Besides, we would like to elucidate our decision to reproduce the baselines. Our primary objective was to ensure a fair and controlled comparison. Given the potential variances that can arise due to software differences, hardware variations, or even random seed discrepancies, direct reuse of reported results from other papers might introduce unintended biases. By reproducing the baselines in the same environment and under identical conditions as our proposed method, SimSGT, we aimed to minimize these external influences and offer a more genuine head-to-head comparison. Now, we keep both the reproduced and original performance.
Here we also try to interpret the discrepancies between the reproduced and original scores. The inconsistency between our reproduced results and the original scores can be partially attributed to numerical precision in computation. This issue is common when using GNNs, because the scatter operation is non-deterministic in PyTorch. Some relevant discussions can be found by searching “reproducible scatter random” on the issue page of PyG’s GitHub repo.
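As a minimal illustration of the numerical point (our own example, independent of PyTorch): floating-point addition is not associative, so a non-deterministic reduction order, as in a GPU scatter-add, can change low-order digits of the result:

```python
import numpy as np

a = np.float32(1e8)
b = np.float32(1.0)

# Adding b first: 1e8 + 1 rounds back to 1e8 in float32, so b is lost.
left_first = (a + b) - a    # -> 0.0
# Cancelling a first preserves b exactly.
right_first = (a - a) + b   # -> 1.0

assert left_first == np.float32(0.0)
assert right_first == np.float32(1.0)
```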
**Reference:**
[1] BEiT: BERT Pre-Training of Image Transformers. In ICLR 2022.
[2] BEIT V2: Masked Image Modeling with Vector-Quantized Visual Tokenizers. In arxiv 2022.
[3] An Empirical Study of End-to-End Video-Language Transformers with Masked Visual Modeling. In CVPR 2023.
[4] Mole-BERT: Rethinking Pre-training Graph Neural Networks for Molecules. In ICLR 2023.
---
Rebuttal 2:
Title: Follow-up Discussion
Comment: Thank you for your valuable feedback on our submission, particularly your suggestions to **compare with GraphMAE and Mole-BERT's original results** and to **clarify the naming strategy of the tokenizer**. These insightful suggestions have strengthened our claims.
We hope that these improvements will be taken into consideration. If we have fully addressed your concerns about our paper, we would be grateful if you could re-evaluate it. If you have additional concerns, we would be more than happy to discuss them with you.
---
Rebuttal Comment 2.1:
Comment: Thanks for the authors' rebuttal. I have read all reviews. I think my concerns have been resolved and decide to increase my rating.
---
Reply to Comment 2.1.1:
Title: Thanks for the feedback
Comment: Thank you for recognizing our efforts in the rebuttal. We appreciate your decision to increase the rating of our paper. Your feedback has been invaluable to our work. | Summary: This paper mainly revisits the graph tokenizers and graph autoencoders in Masked Graph Modeling (MGM) frameworks. The authors examine the roles of different tokenizers as the MGM’s reconstruction targets and propose a simple GNN-based tokenizer and a decoding strategy. The experimental results show the effectiveness of the designed tokenizer and decoding strategy, as well as some insightful findings.
Strengths: 1. The paper is well written and mostly easy to understand.
2. The revisiting part is clear, and the summary of current tokenizers and autoencoders is systematic.
3. The code is provided by an anonymous link.
Weaknesses: 1. The motivation for the proposed method is somewhat unclear and the novelty is limited. The revisiting part takes a lot of space to introduce the existing tokenizers and decoding strategies. There’s no explicit insight drawn from the revisiting results, which could be helpful for designing the proposed method.
2. The technical contributions of this work are limited and somewhat incremental. The proposed methods are all based on the current MGM framework and only make minor changes to the tokenizer and remask strategy. Concretely, the proposed simple GNN-based tokenizer is based on the previous pretrained GNN-based tokenizer concept, and remask-v2 has already been widely used in many scenarios to prevent transformer layers from processing masked items.
3. From the experiments in Table 3, the improvements of the proposed tokenizer and decoder are limited. In Table 3(b), when the depth of the tokenizer is 4 or 5, the performance of the proposed tokenizer is even worse than previous tokenizers. Additionally, there’s no standard error reported.
4. The comparison in Table 5 and Table 6 is somewhat unfair. The GNN encoders or decoders of the baselines may be different from this work's. So if the GNN encoders of this work are strong, the higher performance may be due to the encoders rather than the proposed tokenizers and decoding strategy of this work.
5. There’s no theoretical analysis of the proposed method.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. What are the real differences between the proposed tokenizer and the pretrained GNN-based tokenizer? Can the proposed method be considered as a special case of the previous GNN-based tokenizer? Authors are encouraged to investigate more about the insights behind the different tokenizers.
2. In the experiments, the authors use ZINC15 as the pretraining dataset. What if more pretraining datasets are used? Will the proposed method still perform well with different sizes of pretraining datasets? The authors are encouraged to do more experiments or some theoretical analysis about why the proposed method works better than others.
The response from the authors addressed these questions and clarified the confusing parts of the paper.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Q1.** The motivation ... is somewhat unclear and the novelty is limited ... There’s no explicit insight drawn from the revisiting results, which can be helpful for designing the proposed method.
**Response:** We appreciate your insights, but wish to respectfully emphasize our contribution and novelty. Our primary objective is not just to introduce another model, but to critically analyze and rethink the design paradigms (especially tokenizer and decoder choices) in MGM for molecule SSL. Our motivation and novelty are summarized in the following points:
1. Systematic Scrutiny: We believe ours is a pioneering effort to meticulously scrutinize the prevalent MGM design choices for molecule SSL. Our work sheds light on the inherent advantages and disadvantages of both tokenizers and decoders.
2. Simple yet Effective Design: Our SimSGT represents a simple yet effective approach to address the identified limitations in tokenization and decoding. While simplicity is at its core, its effectiveness in addressing complex challenges cannot be overstated.
3. Extensive Evaluation: We have conducted a comprehensive set of experiments to underscore the superiority of SimSGT.
We will make these points clear in the revision. We kindly urge the reviewer to re-assess our contributions to molecule SSL.
> **Q2.** The technical contributions of this work are less and somewhat incremental. The proposed methods are all based on the current MGM framework and only change a little about the tokenizer and remask strategy ...
**Response:** Thanks. Our focus is on **rethinking** MGM design choices for molecules. We acknowledge that most individual components come from previous works, although our implementations can differ. The main contribution of this work is not any single component, but the evaluation and the unique composition of these components. Through comparisons in Tables 3 and 4 and Figures 5 and 6, we demonstrate the advantages of our particular design choices.
> **Q3.** In Table 3(b), when the depth of the tokenizer is 4 or 5, the performance of the proposed tokenizer is even worse than previous tokenizers.
**Response:** Thank you for noting the performance at specific depths in Table 3(b).
The tokenizer's depth is a hyperparameter, and SGT's effectiveness should be judged by its maximum or mean performance. In Table 3(b), SGT shows the highest max and mean performance. Regarding the different behavior of pretrained GNN-based tokenizers and SGT with respect to depth, we provide a discussion in the response to Q4 of Reviewer Qgfd.
> **Q4.** The comparison in Tables 5 and 6 is somewhat unfair. The GNN encoders or decoders of the baselines may be different from this work ...
**Response:** We agree that a fair comparison is important, and we have taken measures to ensure fairness:
* **Encoder.** To ensure a fair comparison, we have 1) categorized methods according to encoder types in Table 5; and 2) reported in Table 6 the performance of SimSGT with the GINE encoder, the same encoder used by the baselines.
* **Decoder.** Recall that we compared MGM baselines using our proposed decoder and re-masking strategy in Table 3(b). In Tables 5 and 6, we deliberately retain the original decoders of the MGM baselines. Altering them would introduce multiple variables to control, thereby complicating the comparison. By keeping the original design, we maintain a clear comparison.
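As a rough, hedged sketch of the remask idea mentioned above (our own minimal pure-Python version with made-up shapes, not the paper's implementation): before decoding, the encoder's outputs at the originally masked positions are overwritten with a shared mask embedding, so the decoder must reconstruct those nodes from context rather than copying the encoder's states.

```python
import random

random.seed(0)
num_nodes, dim = 6, 4

# Stand-ins: encoder outputs and a shared (normally learnable) mask vector.
node_states = [[random.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(num_nodes)]
mask_embedding = [0.0] * dim
is_masked = [False, True, False, True, False, False]

# Remask decoding: re-insert the mask embedding at masked positions, so the
# decoder cannot trivially read off the encoder's representation there.
decoder_input = [mask_embedding[:] if m else h[:] for h, m in zip(node_states, is_masked)]

assert decoder_input[1] == mask_embedding
assert decoder_input[0] == node_states[0]
```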
> **Q5.** There’s no theoretical analysis ... Authors are encouraged to do more experiments or do some theoretical analysis about why the proposed method works better than others.
**Response:** We agree that theories can bring insights to research. However, we respectfully argue that empirical studies offer equally valuable insights.
In this submission, we provide extensive experiments to validate our findings and designs in both decoders and tokenizers. For example:
* In Section 4.1, our experiments reveal that “a sufficiently expressive decoder with remask decoding is crucial for MGM”, which is not recognized by previous MGM works;
* In Section 4.2, our experiments reveal a surprisingly simple but effective method for molecule tokenization: “Single-layer SGT outperforms or matches other tokenizers.”
Through sharing these findings, we hope to provide insights that may be valuable to other researchers. We welcome further discussion on this subject.
> **Q6.** In experiments, the authors use ZINC15 as the pretraining dataset ... Will the proposed method still perform well with different sizes of pretraining datasets? ...
**Response:** Thanks for the advice about more datasets. We have indeed used different datasets for pretraining: ZINC15 (Table 5, 50 thousand molecules) and GEOM (Table 6, 2 million molecules). Note that, these datasets meet the proposed size requirement. On both datasets, we observe that SimSGT shows improvements when pretrained.
**Reference:**
[1] Masked Autoencoders Are Scalable Vision Learners. In CVPR 2022.
[2] Masked autoencoders as spatiotemporal learners. In NeurIPS 2022.
---
Rebuttal 2:
Title: Follow-up Discussion
Comment: Thank you for your valuable comments and suggestions on our submission. Your suggestions to 1) **clarify the motivation and contribution of this work**, 2) **illustrate the difference between our tokenizer SGT and the pretrained GNN-based tokenizer**, 3) **verify our method on more than one pretraining datasets**; and 4) **elaborate the baselines' encoders and decoders in Table 5 and Table 6** have helped to substantially improve the coherence and significance of our submission. We hope that these improvements will be taken into consideration.
If our response has resolved your concerns on our paper, we will greatly appreciate it if you could re-evaluate our paper. Should you have any further questions or need additional clarification, please know that we are eager and prepared to continue our discussions.
---
Rebuttal Comment 2.1:
Title: Typo Correction and Additional Results
Comment: We apologize for a typo in our previous rebuttal response. There was confusion regarding the dataset sizes of ZINC15 and GEOM in our original response to **Q6**. Here we correct that response and mention our new results on the quantum-mechanics benchmark.
> **Q6.** In experiments, the authors use ZINC15 as the pretraining dataset ... Will the proposed method still perform well with different sizes of pretraining datasets? ...
**Response:** Thanks for the advice on testing more pretraining datasets. We have indeed used different datasets for pretraining: ZINC15 (Table 5, 2 million molecules) and GEOM (Table 6, 50 thousand molecules). Note that these datasets meet the proposed size requirement. On both pretraining datasets, we observe that SimSGT shows improvements over the baseline methods in downstream tasks.
Additionally, we have improved the diversity of our downstream datasets, further demonstrating the effectiveness of SimSGT. In Table 3 of our updated PDF file, we have added new results on a Quantum-Mechanics benchmark: QM9 dataset. The new results re-use our checkpoints and experimental settings of the ZINC15 dataset. We observe that SimSGT significantly outperforms baselines in this new benchmark.
Finally, we kindly invite the Reviewer to re-assess the rating of our paper, taking into account the improvements made during the rebuttal process. If you have any further concerns or questions, we are more than happy to discuss with you.
---
Rebuttal 3:
Title: Inquiry on Additional Feedback
Comment: Thanks for your constructive feedback on our paper. We kindly inquire whether there may exist any additional concerns or unresolved questions that might be impeding the paper's attainment of a higher rating. We are available for any further clarifications or discussions!
---
Rebuttal Comment 3.1:
Comment: Thank you for the response, which is very helpful for understanding the value of this paper and addressing the confusing parts. I am increasing my rating to borderline accept.
---
Reply to Comment 3.1.1:
Title: Thank you for the feedback!
Comment: Thank you for your insightful review and the positive feedback of our work. Your comments have greatly improved the quality and clarity of our paper. We appreciate your support! | Summary: The authors attempt to categorize existing approaches for pretraining neural networks on molecular graphs and assess their contributions to pretraining quality. They then propose a new strategy for pretraining molecular graphs and compare it to existing results.
Strengths: I was very impressed by this paper: the research was well-motivated, and the approach used was novel. I also found the paper informative and easy to read: the authors clearly describe how they came to the conclusions they did and what motivated their explorations.
Weaknesses: Maybe I was looking in the wrong places, but I struggled to find something to criticize here. The only thing that I could find is that the SimSGT framework is strongly reminiscent of the famous BYOL work (https://doi.org/10.48550/arXiv.2006.07733) and a short discussion of the similarities between the two would improve the paper.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: This is primarily a question for future work and my own academic interest in the field. In many practical applications for learning on molecular graphs, SOTA approaches typically consist of regression on classical fingerprint techniques, combined using ensembling methods (note that this typically isn't well-represented on public leaderboards, often because these datasets can become overfit and there is substantial research interest in neural approaches). It seems like modifying the methods proposed here to use fingerprint-based methods instead of the GNN-based tokenizer, e.g. by enumerating the Morgan fingerprints an atom takes part in and hashing, could be a fruitful research direction. I'd be curious to hear the authors' thoughts on this, or on alternative deterministic featurizations to the GNN-inspired tokenizer.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Limitations are appropriately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper. We're genuinely pleased to hear that you found the research well-motivated and the approach novel. Your positive feedback on the paper's clarity and informativeness means a lot to us. The constructive feedback provided will undoubtedly help us further refine and improve our work. We appreciate your positive remarks and thoughtful engagement with our work.
> **Q1:** Maybe I was looking in the wrong places, but I struggled to find something to criticize here. The only thing that I could find is that the SimSGT framework is strongly reminiscent of the famous BYOL work and a short discussion of the similarities between the two would improve the paper.
**Response:** Thank you for your insightful observation regarding the resemblance between our SimSGT framework and BYOL [1]. Below we present a brief discussion of the similarities and distinctions between our method and BYOL, along with other contrastive learning methods.
SimSGT involves minimizing the distances between the outputs from two network branches (*i.e.*, the tokenizer branch and the autoencoder branch). This design is similar to the contrastive learning methods of BYOL [1], SimSiam [2], and BGRL [3], which also minimize the output differences between two network branches. However, a closer inspection reveals several critical distinctions between MGM and these methods. Firstly, MGM feeds uncorrupted data to the tokenizer branch and feeds corrupted data to the autoencoder branch, encouraging the autoencoder to reconstruct the missing information. In contrast, BYOL, SimSiam, and BGRL use corrupted data in both of their branches, constituting different training objectives. Secondly, while BYOL, SimSiam, and BGRL employ nearly identical architectures for their two branches, MGM can adopt distinctly different architectures for its autoencoder and tokenizer. In our best-performing experiment, the autoencoder has more than ten layers of GNNs and Transformers, while the tokenizer is a shallow single-layer network. Finally, MGM employs remask decoding to constrain the encoder's ability on reconstruction, which is not used in contrastive learning methods [1,2,3].
You can find a relevant discussion at Lines 540-553 in the Related Works section (Appendix B).
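To make the first distinction concrete, here is a minimal, illustrative sketch (our own invented names and stand-in "networks", not code from either paper): an MGM-style loss feeds the clean graph to the tokenizer branch and the corrupted graph to the autoencoder branch, and compares their outputs only at masked positions.

```python
def mse(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) / len(u)

def mgm_loss(tokenizer, autoencoder, clean_graph, masked_graph, masked_idx):
    # Tokenizer branch sees *uncorrupted* data; autoencoder branch sees the
    # corrupted version -- unlike BYOL-style methods, which corrupt both views.
    targets = tokenizer(clean_graph)
    preds = autoencoder(masked_graph)
    losses = [mse(preds[i], targets[i]) for i in masked_idx]
    return sum(losses) / len(losses)

# Stand-in "networks": identity maps over lists of node features.
identity = lambda graph: graph

clean = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
corrupted = [[1.0, 2.0], [0.0, 0.0], [5.0, 6.0]]  # node 1 masked out

loss = mgm_loss(identity, identity, clean, corrupted, masked_idx=[1])
print(loss)  # -> 12.5, the reconstruction error at the masked node
```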
> **Q2.** This is primarily a question for future work and my own academic interest in the field. In many practical applications for learning on molecular graphs, SOTA approaches typically consist of regression on classical fingerprint techniques, combined using ensembling methods (note that this typically isn't well-represented on public leaderboards, often because these datasets can become overfit and there is substantial research interest in neural approaches). It seems like modifying the methods proposed here to use fingerprint-based methods instead of the GNN-based tokenizer, e.g. by enumerating the Morgan fingerprints an atom takes part in and hashing, could be a fruitful research direction. I'd be curious to hear the authors' thoughts on this, or on alternative deterministic featurizations to the GNN-inspired tokenizer.
**Response:** Thank you for the inspiring comment! Indeed, fingerprint-based methods like Morgan fingerprints and ECFP [4] can also be used as graph tokenizers to provide reconstruction targets for MGM. In fact, these fingerprint-based methods can be seen as special GNNs with hashing functions as the update layers for node representations. Therefore, given their demonstrated effectiveness in many tasks, it is certainly compelling to employ them as graph tokenizers.
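To illustrate this analogy (a toy sketch of ours, not an RDKit-accurate ECFP implementation): a Morgan-style iteration replaces each atom's identifier with a hash of its own identifier and its neighbors' identifiers, which mirrors a GNN message-passing update with hashing in place of a learned function.

```python
def morgan_like_update(atom_ids, adjacency, num_iters=2):
    """Toy Morgan/ECFP-style iteration: hash each atom's identifier together
    with the sorted identifiers of its neighbors -- analogous to a GNN
    aggregation step with hashing as the (non-learned) update function."""
    ids = list(atom_ids)
    for _ in range(num_iters):
        ids = [hash((ids[i], tuple(sorted(ids[j] for j in adjacency[i]))))
               for i in range(len(ids))]
    return ids

# A 4-atom path "molecule": C(0) - C(1) - O(2) - C(3).
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
ids = morgan_like_update(["C", "C", "O", "C"], adjacency)

# Atoms 0 and 3 are both terminal carbons, but their neighborhoods differ
# (a carbon vs. an oxygen neighbor), so they receive different identifiers.
assert ids[0] != ids[3]
```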
**Reference:**
[1] Bootstrap your own latent - A new approach to self-supervised learning. In NeurIPS 2020.
[2] Exploring simple siamese representation learning. In CVPR 2021.
[3] Large-scale representation learning on graphs via bootstrapping. In ICLR 2022.
[4] Extended-Connectivity Fingerprints. In Journal of Chemical Information and Modeling 2010.
Rebuttal: We appreciate all the reviewers' efforts in reviewing this submission. Our submission has received diverse ratings, including one strong accept (8), one weak accept (6), one borderline accept (5), one borderline reject (4), and one reject (3).
We would like to thank all the reviewers for providing insightful comments and valuable suggestions. In response to the reviewers' requests, we have uploaded a PDF file with updated experimental results. The updated results include:
* **[Reviewer Qgfd, tR9n]:** The performances of Mole-BERT and GraphMAE when using the GraphTrans encoder, and their performances reported in the original paper. Present in new Table 1.
* **[Reviewer 9pKQ]:** The performances of two new baselines: S2GAE[1] and GraphMAE2[2]. Present in new Table 1.
* **[Reviewer Qgfd]:** Performances on a new downstream dataset of quantum mechanics: QM9 [3]. Present in new Table 3.
* **[Reviewer 9pKQ]:** The comparison of computational time. Present in new Table 2.
* **[Reviewer Qgfd]:** Testing the motif-based tokenizer's performance when using max and sum pooling. Present in new Table 4.
Here we also present the response to one common question regarding the difference between our proposed Simple GNN-based Tokenizer (SGT) and the pretrained GNN-based tokenizers.
> **Q1. The proposed tokenizer and the pretrained GNN-based tokenizer**
>
> * **[Reviewer Lzf1].** What are the real differences between the proposed tokenizer and the pretrained GNN-based tokenizer? Can the proposed method be considered as a special case of the previous GNN-based tokenizer? ...
>
> * **[Reviewer 9pKQ].** It remains unclear why removing the nonlinear update function in each GNN layer can train a better encoder.
**Response:** Thanks for your valuable insights. We've summarized the key technical distinctions between our SGTs and pretrained GNN tokenizers as follows:
1. **Nonlinearity Elimination.** Once the nonlinear update functions are removed, SGT is essentially a linear combination of graph operators, whereas pretrained GNN tokenizers mostly adopt nonlinear graph convolution layers. Linear GNNs have been studied in previous works [5,6,7]. [5] theoretically shows that simple GCN operators function as low-pass filters in the graph spectral domain. [6] proves that linear GNNs are universal approximators under some mild conditions. Additionally, [5,6] show that GNNs without nonlinearity can perform comparably to conventional GNNs.
2. **Nonparametric & Non-trainable Nature.** SGT is a linear combination of graph operators without any trainable parameters, while GNN tokenizers usually hold trainable parameters to optimize. These parameter-free and training-free characteristics make SGT more efficient and simpler to adopt, facilitating the subsequent pretraining of the graph encoder.
3. **Nonlinearities in Conventional GNNs.** Conventional GNNs include nonlinear update functions to enhance their **potential** expressiveness [4]. However, the **real-world** expressiveness and performance of GNNs hinge heavily on the pretraining method and data. This is shown by the results in Table 3(b): changing the pretraining method, such as from GraphMAE to GraphCL, leads to noticeable differences in performance.
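For intuition about what "a linear combination of graph operators" means in practice (an illustrative pure-Python sketch under our own simplifications, not the paper's exact SGT): with the nonlinear updates removed, the tokenizer reduces to repeated, parameter-free neighbor aggregation, so its reconstruction targets are smoothed (low-pass filtered) node features.

```python
def neighbor_average(features, adjacency):
    """One linear propagation step: average each node's feature with its
    neighbors' features (a simplified, parameter-free graph operator)."""
    out = []
    for i, f in enumerate(features):
        neighbors = adjacency[i]
        out.append((f + sum(features[j] for j in neighbors)) / (1 + len(neighbors)))
    return out

def linear_tokenizer(features, adjacency, num_layers=2):
    # No nonlinearity, no trainable weights: just stacked linear propagation.
    for _ in range(num_layers):
        features = neighbor_average(features, adjacency)
    return features

# A 3-node path graph (0 - 1 - 2) with scalar node features.
adjacency = {0: [1], 1: [0, 2], 2: [1]}
targets = linear_tokenizer([0.0, 3.0, 6.0], adjacency)
print(targets)  # -> [2.25, 3.0, 3.75]: features are smoothed toward the mean
```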
**Why do pretrained GNN-based tokenizers not outperform SGT?**
We conjecture that a discrepancy exists between existing molecule pretraining methods and the objectives of molecule tokenizers. This is backed by the observation that the pretraining method can largely influence the performance of pretrained GNN-based tokenizers: in Table 3(b), changing the pretraining method, such as from GraphMAE to GraphCL, leads to noticeable differences in performance. Indeed, among existing pretraining methods, only Group VQ-VAE [8] is designed for molecule tokenization, and defining a "good" molecule tokenizer may require more exploration. This work can be part of that exploration.
At the current stage, considering that pretrained GNN-based tokenizers do not outperform SGT, we choose SGT in our framework, which also eliminates the need for costly tokenizer pretraining.
**Reference:**
[1] S2GAE: Self-Supervised Graph Autoencoders are Generalizable Learners with Graph Masking. In WSDM 2023.
[2] GraphMAE2: A Decoding-Enhanced Masked Self-Supervised Graph Learner. In WWW 2023.
[3] Quantum chemistry structures and properties of 134 kilo molecules. In Scientific Data 2014.
[4] How Powerful are Graph Neural Networks? In ICLR 2019.
[5] Simplifying Graph Convolutional Networks. In ICML 2019.
[6] How Powerful are Spectral Graph Neural Networks. In ICML 2022.
[7] On graph neural networks versus graph-augmented mlps. In ICLR 2021.
[8] Mole-BERT: Rethinking Pre-training Graph Neural Networks for Molecules. In ICLR 2023.
Pdf: /pdf/602a185f686927d28831009f65a87090b1e54a44.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper proposes a masked graph modeling framework with a Simple GNN-based Tokenizer (SGT) for molecular graph analysis. Extensive experiments show the performance of the proposed method.
Strengths: 1. The article is well-written and easy to understand.
2. The proposed framework is attractive to this research community.
3. The experimental results further corroborate the authors' view.
Weaknesses: 1. The limitations of the previous methods are not explained clearly. What is the limited understanding of tokenizer and decoder?
2. Theoretical analysis is weak. The paper only provides experimental results to support its idea, but it remains unclear why removing the nonlinear update function in each GNN layer can train a better encoder.
3. In Table 5, the improvement of the proposed method over some baselines (e.g., RGCL) is incremental, which can not well support its effectiveness.
4. Lack of comparison of computational time.
5. Missing baselines [1,2].
[1] S2GAE: Self-Supervised Graph Autoencoders are Generalizable Learners with Graph Masking.
[2] GraphMAE2: A Decoding-Enhanced Masked Self-Supervised Graph Learner.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Q1.** The limitations of the previous methods are not explained clearly.
**Response:** Thanks for your comments. Although the limitations of prior methods have been elucidated (as outlined in Lines 51-54 and Table 1, and supported by findings in Sections 4.1 and 4.2), we will clarify them further:
1. Tokenizer: Most previous research does not consider the potential of existing motif-based fragmentation functions as tokenizers.
2. Decoder: Previous studies mostly adopt a linear or an MLP decoder for graph reconstruction, leaving more expressive decoders unexplored.
Furthermore, we summarize these limitations in the following tables and add them in our revision.
**Table1: Potential limitation of tokenizers in previous works.**
| Tokenizers | Potential Limitation |
| -------------- | ------------------------------- |
| Node, edge | Low-level feature |
| Motif | Rely on expert knowledge |
| Pretrained GNN | Extra pretraining for tokenizer |
| Ours, SimSGT | - |
**Table2: Potential limitation of decoders in previous works.**
| Model | Decoder | Sufficiently Expressive | Remask | Avoid processing masked nodes? |
| ------------ | ---------------- | ----------------------- | ------ | ------------------------------ |
| Others | Linear, MLP | x | - | x |
| GraphMAE | Single-layer GNN | x | v1 | x |
| Ours, SimSGT | GTS-Small | Y | v2 | Y |
> **Q2.** In Table 5, the improvement of the proposed method over some baselines (e.g., RGCL) is incremental...
**Response:** We appreciate your feedback. We would like to respectfully highlight the leading performance of our SimSGT model, especially when juxtaposed with the most competitive baselines, as evidenced in the outlined tables:
**Table3**
| Dataset | BBBP | Tox21 | ToxCast | SIDER | ClinTox | MUV | HIV | BACE | Avg. | Improvement |
| --------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | ---- | ----------- |
| Mole-BERT | 71.9±1.6 | 76.8±0.5 | 64.3±0.2 | 62.8±1.1 | 78.9±3.0 | 78.6±1.8 | 78.2±0.8 | 80.8±1.4 | 74.0 | |
| SimSGT | 72.2±0.9 | 76.8±0.9 | 65.9±0.8 | 61.7±0.8 | 85.7±1.8 | 81.4±1.4 | 78.0±1.9 | 84.3±0.6 | 75.8 | 1.8 |
**Table4**
| | Molecular Property Prediction | | | | | Drug-Target Affinity | | |
| -------- | ------------------------------ | ----------- | ----------- | ----------- | --------- | -------------------- | ----------- | --------- |
| | ESOL | Lipo | Malaria | CEP | Avg. | Davis | KIBA | Avg. |
| GraphMVP | 1.064±0.045 | 0.691±0.013 | 1.106±0.013 | 1.228±0.001 | 1.022 | 0.274±0.002 | 0.175±0.001 | 0.225 |
| SimSGT | 1.039±0.012 | 0.670±0.015 | 1.090±0.013 | 1.060±0.011 | **0.965** | 0.263±0.006 | 0.144±0.001 | **0.204** |
1. In Table 3, SimSGT significantly surpasses the strongest baselines, demonstrating an improvements of 1.8% in average ROC-AUC.
2. In Table 4, SimSGT consistently posts lower RMSE and MSE values than all the competing baselines across every datasets.
Moreover, owing to the difficulty of molecular self-supervised learning, the existing literature [4,5,6,7] indicates steady and gradual advancements rather than groundbreaking leaps. Given the intricacies and nuanced progress inherent to this field, we kindly urge the reviewer to re-assess our contributions and the results we presented.
> **Q3.** Lack of comparison of computational time.
**Response:** Thank you for your valuable suggestion. To address this, we have incorporated the wall-clock pretraining times for SimSGT and key baselines in Table 2 of our updated PDF. Our findings indicate that:
1. SimSGT's pretraining time is on par with GraphMAE [5]. This efficiency is largely attributed to the minimal computational overhead of our SGT tokenizer.
2. In comparison to Mole-BERT [4], the prior benchmark in molecule SSL, SimSGT is approximately three times faster. The computational demands of Mole-BERT can be attributed to its combined approach of MGM training and contrastive learning.
This insightful comparison will certainly be integrated into our revision. We're grateful for your astute feedback, which has undeniably enriched our presentation.
> **Q4.** Missing baselines: 1) S2GAE: Self-Supervised Graph Autoencoders are Generalizable Learners with Graph Masking; 2) GraphMAE2: A Decoding-Enhanced Masked Self-Supervised Graph Learner.
**Response:** Thanks for the suggestion. We have included these new baselines in the updated Table 1. We observe that SimSGT maintains better performance than the baselines. Note that the performance of S2GAE is worse than expected. This might be caused by the mismatch between its pretraining task (*i.e.*, link prediction) and MRL's objective (*i.e.*, graph classification). In Table 9, we similarly observe that using edge features in the reconstruction target leads to worse performance.
**Reference:**
[1] How Powerful are Graph Neural Networks? In ICLR 2019.
[2] Simplifying Graph Convolutional Networks. In ICML 2019.
[3] How Powerful are Spectral Graph Neural Networks. In ICML 2022.
[4] Mole-BERT: Rethinking Pre-training Graph Neural Networks for Molecules. In ICLR 2023.
[5] GraphMAE: Self-Supervised Masked Graph Autoencoders. In KDD 2022.
[6] Let Invariant Rationale Discovery Inspire Graph Contrastive Learning. In ICML 2022.
[7] Self-supervised Graph-level Representation Learning with Local and Global Structure. In ICML 2021.
---
Rebuttal 2:
Title: Follow-up Discussion
Comment: Thank you for your thoughtful feedback on our submission, especially for advising us to 1) **clarify the limitations of previous graph tokenizers and graph decoders**, 2) **include new baselines of S2GAE and GraphMAE2**, and 3) **compare the computational time.** These valuable suggestions have improved the clarity and quality of our work. We hope that these improvements will be taken into consideration.
If our response has resolved your concerns on our paper, we will greatly appreciate it if you could re-evaluate our paper. We are also willing and ready to engage in discussions, if you have any further questions. | null | null | null | null | null | null |
Learning Domain-Aware Detection Head with Prompt Tuning | Accept (poster) | Summary: 1. Proposed a novel framework for domain-aware object detection with
a) a vision-language model-based backbone to extract highly generalized features
b) a domain-aware detection head by prompt tuning
2. Designs a prompt that includes domain-invariant tokens, domain-specific tokens, a class token, and a domain-related textual description.
3. The domain-adaptive prompt tuning maintains a prompt buffer with an ensemble strategy, and the buffer is saved and used for inference.
Strengths: 1. The idea of this work is evident, reasonable, and well-expressed.
2. The generalized semantic knowledge provides practical help for prompt tuning.
3. The proposed method is evaluated on several benchmarks and obtains significant improvements compared with related works.
Weaknesses: 1. Some details about the methodology are not clear:
- I see that the box head is frozen when tuning the prompts, so the box head is trained with the baseline detector only using source data, right? Thus, the domain-aware head is only trained on the classification branch. Won’t the box regression accuracy be influenced when changing the domain? If I understood correctly, there seems to be no adaptation process to deal with the regression.
- The image regions r_j are obtained from the RPN with RoIAlign, so is each r_j in f_j = f(r_j) already a fixed-size feature patch? Since f is the frozen visual encoder, where are the features R = {r_j} extracted from?
2. How much extra inference time will be increased by introducing the textual encoder?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see the "Weaknesses"
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Comment:
We sincerely appreciate the reviewer's constructive feedback. We are encouraged that the reviewer finds our idea evident and reasonable. We address your concerns point by point.
**Q1: I see that the box head is frozen when tuning the prompts, so the box head is trained with the baseline detector only using source data, right? Thus, the domain-aware head is only trained on the classification branch. Won’t the box regression accuracy be influenced when changing the domain? If I understood correctly, there seems to be no adaptation process to deal with the regression.**
A1: Thanks for raising the concern about training the bbox head. The bbox head is trained with only source data. We observed that domain bias often impacts classification more than localization. In DAOD, domain bias primarily manifests as semantic variations. Box regression is semantic-agnostic, so a bbox head trained on the source domain is robust to unknown target domains. Classification, however, is semantic-relevant and is more affected by domain bias. Consequently, when a detector trained on the source domain is applied to the target domain, errors primarily stem from misclassification. To explore this further, we randomly selected 100 images from the Cityscapes dataset and calculated the recall of the ground-truth boxes. The results indicate a successful localization rate of 90.7% (IoU > 0.5), whereas the correct classification rate is only 70.8%: most ground-truth objects can be localized by the bbox head, while many are misclassified. As a result, we opt to share the bbox head and tune only the classifier in DA-Pro. Our future research will explore more efficient strategies for tuning the bbox head, such as class-aware and domain-aware bbox heads.
**Q2: The image regions rj is obtained from RPN with RoIAlign, so is each rj in fj=f(rj) already a fixed-sized feature patch? Since f is the frozen visual encoder, where are the features R={rj} extracted from?**
A2: Thanks for pointing this issue out. r_j is the region proposal (box) obtained from RPN, and f_j = f(r_j) is a fixed-size feature patch inferred by the visual encoder f with RoIAlign. We will make the adjustments in the manuscript to prevent any potential ambiguity.
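For intuition, here is a much-simplified, hypothetical stand-in for RoIAlign (plain NumPy; the real operator also performs sub-pixel bilinear sampling and handles batches and scales). It crops a proposal box from a feature map and average-pools the crop to a fixed-size grid, illustrating why every f_j has the same shape regardless of the proposal size:

```python
import numpy as np

def pool_region(feature_map, box, out_size=2):
    """Crop a box from a 2-D feature map and average-pool it to a fixed grid.

    A simplified illustration of fixed-size region pooling, not RoIAlign itself.
    feature_map: (H, W) array; box: (x0, y0, x1, y1) in feature-map coordinates.
    """
    x0, y0, x1, y1 = box
    crop = feature_map[y0:y1, x0:x1]
    h, w = crop.shape
    # split the crop into out_size x out_size bins and average each bin
    bins_y = np.array_split(np.arange(h), out_size)
    bins_x = np.array_split(np.arange(w), out_size)
    return np.array([[crop[np.ix_(by, bx)].mean() for bx in bins_x]
                     for by in bins_y])

fmap = np.arange(36.0).reshape(6, 6)
patch = pool_region(fmap, (0, 0, 4, 4))
print(patch.shape)  # (2, 2) regardless of the box size
```

Proposals of different sizes all yield the same `out_size × out_size` patch, which is what makes the downstream classifier input fixed-size.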
**Q3: How much extra inference time will be increased by introducing the textual encoder?**
A3: Thanks for your concern. After training, the learned prompts and their corresponding text embeddings will be stored. During inference, the textual encoder will not be invoked, and there is no additional inference time introduced.
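As an illustrative sketch of this caching scheme (names and shapes are hypothetical, not our actual implementation): the text encoder runs once offline to produce the prompt embeddings, and inference against the cached embeddings reduces to a single matrix product:

```python
import numpy as np

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

class CachedPromptHead:
    """Classify region features against pre-computed prompt text embeddings.

    The text encoder is invoked once, offline; at inference time only a
    cosine-similarity matmul against the cached weights is needed.
    """
    def __init__(self, text_embeddings):
        # (num_classes, dim), computed once from the learned prompts
        self.weights = l2_normalize(np.asarray(text_embeddings))

    def classify(self, region_features):
        # cosine similarity between region features and cached embeddings
        feats = l2_normalize(np.asarray(region_features))
        return feats @ self.weights.T

# toy example: 3 classes, 4-dim embeddings, 2 region proposals
rng = np.random.default_rng(0)
head = CachedPromptHead(rng.normal(size=(3, 4)))
scores = head.classify(rng.normal(size=(2, 4)))
print(scores.shape)  # (2, 3)
```

Because `weights` is frozen after training, the per-image inference cost is identical to a standard linear classifier head.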
---
Rebuttal Comment 1.1:
Comment: Dear reviewer, since the discussion stage is about to end, do you have any other questions or suggestions? We are happy to discuss with you. | Summary: This paper proposes a new domain adaptive object detection (DAOD) method named DA-Pro. Unlike previous methods, which ignore the domain bias in the detection head, DA-Pro applies the learnable domain-adaptive prompt to generate the dynamic detection head for each domain. To do so, the prompt is designed to be composed of domain-invariant tokens, domain-specific tokens, domain descriptions, and class labels. The proposed method is evaluated in three scenarios and shows favorable performance compared to existing methods (non vision-language-model (VLM) based) and a baseline method (VLM based).
Strengths: S1. The proposed method is reasonably designed.
S2. The proposed method shows favorable performance compared to existing non VLM-based DAOD methods and a VLM-based baseline method.
S3. The ablation study reveals the effectiveness of each proposed component.
S4. The paper generally reads well and is easy to follow.
Weaknesses: W1. The novelty of the paper is a little weak. I do acknowledge that the paper has certain novelty in the sense that it reasonably integrates [4] into the DAOD task, and the authors showed not only that the proposed method achieved better performance than existing non-VLM-based DAOD methods and the VLM-based baseline (table 1), but also that the newly introduced components contributed to the performance gain (table 2). However, the contribution is a rather straightforward extension given the previous work of [4] and the DAOD task.
W2. The proposed method is evaluated on rather limited datasets. All the datasets used are related to Cityscapes. In addition, the K->C and S->C scenarios only deal with the car class. Evaluation on a wider variety of datasets such as PASCAL VOC, Clipart, and Watercolor2k would make the paper more convincing.
W3. The details of the baseline method, which is very important because the proposed method can be fairly compared only to this baseline method due to the usage of strong VLM, is a bit unclear. I suggest to show the architecture of the baseline method as in Fig 2 and explicitly show the difference with the proposed method. I believe this would more clearly reveal which component in the proposed method is important. I wonder if the pseudo labels are used in the baseline method.
Minor:
W4. It is nice to have the ablation on the hand-crafted prompt. The paper says “clear” and “foggy” is used for Cityscape and Foggy Cityscape, respectively. I wonder how the performance changes if alternative words are used. Which words are used for the other scenarios?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please discuss the points that I raised in the weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: There is no discussion on the limitation of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Comment:
We appreciate the reviewer for the valuable comments. Our response to the reviewer’s questions is as follows.
**Q1: The novelty of the paper is a little weak. I do acknowledge that the paper has certain novelty in a sense that it reasonably integrates [4] into the DAOD task and the authors showed not only the proposed method achieved better performance than existing non VLM-based DAOD methods and the VLM-based baseline (table 1), but also the newly introduced components contributed to the performance gain (table 2). However, the contribution is rather straightforward extension given the previous work of [4] and the DAOD task.**
A1: Thanks for your valuable concern. As an object detection method, the motivation of [4] is to learn a highly generalizable and discriminative prompt (shown in Eq. 2) on the training domain (a single domain). When applied to the DAOD task, it can only capture knowledge of the training domain and ignores the cross-domain information that is vital for DAOD. Due to domain bias, it achieves limited performance on the target domain.
Our work overcomes this limitation in DAOD by enabling the prompt to learn cross-domain information, which is considered an interesting (Reviewer 85mu), reasonable (Reviewer A2TR), and promising direction (Reviewer x91n) addressing a significant research problem (Reviewer eDFv). We introduce the novel concept of a domain-adaptive prompt, consisting of domain-invariant tokens shared between both domains and domain-specific tokens unique to each domain. With the domain-invariant and domain-specific tokens capturing domain-shared and domain-specific knowledge respectively, our DA-Pro achieves better performance on the unlabelled target domain. Evaluation on the C2F task shows that a learnable prompt in the form of [4] achieves an mAP of 53.0, while our domain-adaptive prompt achieves 55.9, demonstrating better cross-domain performance.
**Q2: The proposed method is evaluated on rather limited datasets. All the datasets used are related to cityscape. In addition, K->C and S->C scenarios only deal with car class. The evaluation on wider variety of datasets such as PASCAL VOC, clipart, and Watercolor2k would make the paper more convincing.**
A2: Thanks for your constructive suggestion. We have included three additional benchmarks: Pascal to Clipart, Watercolor, and Comic. This expansion allows the method to be evaluated under more challenging domain shifts and in multi-class settings. Our proposed method surpasses the SOTA method (SIGMA++ with ResNet-101) using a weaker backbone (ResNet-50) on all three additional benchmarks, showing the effectiveness of DA-Pro.
| | Pascal to Watercolor | Pascal to Clipart | Pascal to Comic |
| --- | --- | --- | --- |
| DBGL(ResNet-101) | 53.8 | 41.6 | 29.7 |
| Baseline | 54.8 | 43.4 | 40.6 |
| FGRR(ResNet-101) | 55.7 | 43.3 | 32.7 |
| SIGMA++(ResNet-101) | 57.1 | 46.7 | 37.1 |
| DA-Pro(ResNet-50) | **58.1** | **46.9** | **44.6** |
*DBGL: Chen C, Li J, Zheng Z, et al. Dual bipartite graph learning: A general approach for domain adaptive object detection[C]. In CVPR, 2021.*
*FGRR: Chen C, Li J, Zhou H Y, et al. Relation matters: foreground-aware graph-based relational reasoning for domain adaptive object detection[J]. In TPAMI, 2022.*
*SIGMA++: Li W, Liu X, Yuan Y. SIGMA++: Improved Semantic-complete Graph Matching for Domain Adaptive Object Detection[J]. In TPAMI, 2023.*
**Q3: The details of the baseline method, which is very important because the proposed method can be fairly compared only to this baseline method due to the usage of strong VLM, is a bit unclear. I suggest to show the architecture of the baseline method as in Fig 2 and explicitly show the difference with the proposed method. I believe this would more clearly reveal which component in the proposed method is important. I wonder if the pseudo labels are used in the baseline method.**
A3: Thanks for your advice. The baseline adopts the detection framework of RegionCLIP with a domain classifier, where the prompt is generated from the hand-crafted template "A photo of [class][domain]". The primary distinction between the baseline and DA-Pro lies in the design of the prompt within the detection head, as well as the use of two sets of constraint losses for tuning the prompt. We will include a figure in the appendix showing the architecture of the baseline. Both the baseline and the proposed method are trained on the annotated source and the unlabelled target domain with classification, regression, and adversarial losses. After that, the backbone is frozen and we tune the proposed domain-adaptive prompt with the two sets of constraints to learn domain-shared and domain-specific knowledge. We extensively discuss the impact of each proposed component in Tables 3, 4, and 5 of the appendix.
Since pseudo-labels are only utilized in the prompt tuning process, they are not employed in the baseline method.
**Q4: It is nice to have the ablation on the hand-crafted prompt. The paper says “clear” and “foggy” is used for Cityscape and Foggy Cityscape, respectively. I wonder how the performance changes if alternative words are used. Which words are used for the other scenarios?**
A4: Thanks for raising this point. For hand-crafted prompts, even subtle wording differences can lead to variations in performance, and precise descriptions often yield better performance. This phenomenon is also reported in CoOp. We evaluated several alternative words in the C2F scenario. In other scenarios, for instance, we use “game” and “real” for SIM10K to Cityscapes, and “real” and “watercolor” for Pascal to Watercolor.
| Source domain | Target domain | mAP |
| --- | --- | --- |
| clear | foggy | **55.9** |
| cityscapes | foggycityscapes | 55.1 |
| clear | fuzzy | 54.7 |
| - | - | 53.4 |
---
Rebuttal Comment 1.1:
Comment: I appreciate the feedback from the authors.
> A1, A3
In my view, the design of the domain-invariant tokens, domain-specific tokens, and the domain-related textual description along with the class label is a rather straightforward application of the core ideas presented in [41] and [4] to the DAOD task.
Probably, what is not straightforward is how to make it work.
In such sense, the usage of pseudo labels may be one of the key factors of the proposed method.
What is interesting for me is that although the performance of Baseline is not that high (52.6 in C->F), the performance increases when the output of Baseline is used as pseudo labels as in equation 11.
I would like to hear the authors' opinion on this point, and happy to clarify the essential contribution of this work with the authors.
> A2
Thank you for the additional results. I think these results make the paper much stronger.
> A4. In other scenarios, for instance, we use “game” and “real” in SIM10K to Cityscapes, “real” and “watercolor” in Pascal to Watercolor.
What about KITTI → Cityscapes?
Overall, I became more positive on the paper, and I increased my rating accordingly.
---
Reply to Comment 1.1.1:
Comment: Thanks for your positive and insightful feedback. We really appreciate your constructive review and your precious time.
> A1,A3
>
One of our contributions is exploring how to optimize the domain-adaptive prompt so that it captures domain-shared and domain-specific knowledge. Indeed, to make the proposed domain-adaptive prompt work as expected, we introduce two unique constraints, and pseudo-labels are crucial for both constraints to hold.
Firstly, domain-invariant knowledge can be learned by classifying correctly on both domains. Motivated by this, we constrain the detection heads in the source and target domains to classify the input image as accurately as possible. As shown in the third-to-last row of Table 3 in the appendix, the domain-adaptive prompt improves over the hand-crafted prompt (Baseline) by 2.2 mAP with this constraint. The first term of Eq. 11 belongs to this constraint and requires the target-domain classifier to predict correctly on target data. To achieve this, pseudo-labels on the target domain are necessary.
Second, the classifier that learns domain-specific knowledge in one domain should perform better than the other domain's classifier when processing images from that domain. Inspired by this, we constrain the detection head generated by one domain to output higher confidence in its own domain than in the other. As shown in the second-to-last row of Table 3 in the appendix, introducing this constraint further improves mAP by 0.4. The pseudo-labels are utilized in the corresponding loss, the second term of Eq. 11.
Meanwhile, the pseudo-labels also function in the information entropy loss, which brings a further 0.3 mAP gain, as shown in the last row of Table 3 in the appendix.
> A4
>
In the original setting, we apply [’KITTI’, ‘cityscapes’] in the K→C adaptation (61.4 mAP). The main difference between KITTI and Cityscapes lies in the FoV (field of view), which is difficult to describe with words. We believe that when facing complex, hard-to-describe semantics, the domain difference is mainly learned by the learnable prompt, and the help of the hand-crafted prompt is relatively limited. We therefore tested an alternative word pair, [highway, city], and the results are similar (61.1 mAP).
We hope we have addressed all of your concerns. Thank you! | Summary: Most existing methods incorporate a visual encoder (detection backbone) to mitigate the shift across the domain. This paper leverages domain adaptive prompts comprises of domain invariant tokens, domain-specific tokens, and domain-related textual description with class label. These domain adaptive prompts introduce detection heads across the domain having a backbone of vision language models, exploring its generalizability for adaptation tasks. Experiments include three domain adaptation scenarios for evaluation.
Strengths: + This paper is well-written and easy to follow
+ An interesting application in domain adaptation space, utilizing the vision language models and analyzing the importance of the detection head instead of the backbone (visual encoder)
+ Figures are helpful to understand the overall proposed method
Weaknesses: - Authors need to consider Pascal to Clipart, watercolor, and comic experiments. That allows the method to be evaluated in more challenging domain shifts and multi-class problem settings.
- Impact of uncertainty on pseudo-labels? It would be interesting to see, in this problem setting, how uncertainty helps in selecting pseudo-labels instead of taking probabilities above a certain threshold. Also, how accurate are the pseudo-labels?
- Evaluation set 1500 in Cityscapes to Foggy Cityscapes? Authors need to be consistent in the evaluation e.g. in the works [see SIGMA, TIA, etc], the evaluation set is 500 images with the highest level of density in Foggy Cityscapes.
- It would be interesting to see error bar plots on multiple runs
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Please see the weakness section for relevant questions
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Limitations are not mentioned explicitly. Authors are encouraged to add limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Comment:
We appreciate the reviewer for the valuable comments. We are pleased to see our idea being regarded as interesting. We will explain your concerns point by point.
**Q1: Authors need to consider Pascal to Clipart, watercolor, and comic experiments. That allows the method to be evaluated in more challenging domain shifts and multi-class problem settings.**
A1: Thanks for your constructive suggestion. We have included these three additional benchmarks: Pascal to Clipart, Watercolor, and Comic. Our proposed method surpasses the SOTA method (SIGMA++ with ResNet-101) using a weaker backbone (ResNet-50) on all three additional benchmarks, showing the effectiveness of DA-Pro.
| | Pascal to Watercolor | Pascal to Clipart | Pascal to Comic |
| --- | --- | --- | --- |
| DBGL(ResNet-101) | 53.8 | 41.6 | 29.7 |
| Baseline | 54.8 | 43.4 | 40.6 |
| FGRR(ResNet-101) | 55.7 | 43.3 | 32.7 |
| SIGMA++(ResNet-101) | 57.1 | 46.7 | 37.1 |
| DA-Pro(ResNet-50) | **58.1** | **46.9** | **44.6** |
*DBGL: Chen C, Li J, Zheng Z, et al. Dual bipartite graph learning: A general approach for domain adaptive object detection[C]. In CVPR, 2021.*
*FGRR: Chen C, Li J, Zhou H Y, et al. Relation matters: foreground-aware graph-based relational reasoning for domain adaptive object detection[J]. In TPAMI, 2022.*
*SIGMA++: Li W, Liu X, Yuan Y. SIGMA++: Improved Semantic-complete Graph Matching for Domain Adaptive Object Detection[J]. In TPAMI, 2023.*
**Q2: Impact of uncertainty on pseudo labels? It would be interesting to see that in this problem setting, how uncertainty is helpful for a selection of pseudo labels instead of probabilities are taken into account above certain thresholds. Also how accurate are the pseudo labels?**
A2: Thanks for raising this point. Since the probabilities are generated with a fixed prompt "A photo of [class]", aligning the model's predictions with these probabilities would make the learnable prompt converge to the hand-crafted prompt. Unlike probabilities, pseudo-labels do not demand learning the relative distances to each category given by the hand-crafted prompt. Instead, they require the prompt to be as close to the correct category and as far from the incorrect categories as possible, thereby learning a more discriminative prompt. We conduct experiments on the three benchmarks; replacing pseudo-labels with probability supervision causes a 0.6–1.6% performance degradation.
| Supervision\Benchmark | c→f | k→c | s→c |
| --- | --- | --- | --- |
| probabilities | 54.3 | 60.8 | 62.1 |
| pseudo-label(ours) | **55.9** | **61.4** | **62.9** |
In the C2F scenario, the generated pseudo-labels achieve an accuracy of 91.6%.
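As a minimal illustration of thresholded hard pseudo-labeling (a hypothetical helper, not our actual code): only confident predictions are kept, and each kept sample receives its argmax class as a hard label:

```python
import numpy as np

def hard_pseudo_labels(probs, threshold=0.8):
    """Keep only confident predictions as hard pseudo-labels.

    probs: (n, num_classes) softmax outputs from the source-trained head.
    Returns (indices of kept samples, their argmax class labels).
    """
    probs = np.asarray(probs)
    conf = probs.max(axis=1)
    keep = conf >= threshold
    return np.flatnonzero(keep), probs[keep].argmax(axis=1)

probs = np.array([[0.9, 0.05, 0.05],   # confident -> kept, class 0
                  [0.4, 0.35, 0.25],   # low confidence -> dropped
                  [0.1, 0.1, 0.8]])    # confident -> kept, class 2
idx, labels = hard_pseudo_labels(probs)
print(idx, labels)  # [0 2] [0 2]
```

The hard labels discard the relative distances encoded in the soft probabilities, which is the property discussed above that lets the learnable prompt move away from the hand-crafted one.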
**Q3: Evaluation set 1500 in Cityscapes to Foggy Cityscapes? Authors need to be consistent in the evaluation e.g. in the works [see SIGMA, TIA, etc], the evaluation set is 500 images with the highest level of density in Foggy Cityscapes.**
A3: Thanks for pointing this out. We evaluate on the 1500-image set with three levels of fog density to be consistent with [DA-Faster, VDD, DSS, SCAN, AT]. For [SIGMA, TIA, MeGA], we conducted an additional evaluation on the 500 images with the highest fog level, achieving 51.2% mAP over 8 classes on the C2F adaptation task.
| 1500 test set | mAP |
| --- | --- |
| DA-Faster | 32.0 |
| VDD | 40.0 |
| DSS | 40.9 |
| SCAN | 42.1 |
| AT | 50.9 |
| DA-Pro | **55.9** |
| 500 test set | mAP |
| --- | --- |
| MeGA | 41.8 |
| TIA | 42.3 |
| SIGMA | 44.2 |
| DA-Pro | **51.2** |
**Q4: It would be interesting to see error bar plots on multiple runs**
A4: Thanks for the advice. We conducted 5 runs in the C2F scenario. The results indicate a mean mAP of $55.88 \pm 0.1$. We will include experiments on other benchmarks.
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses. Overall, the rebuttal looks fine, and I want to keep my score. I recommend including the additional experiments in the paper, along with the required clarifications.
---
Reply to Comment 1.1.1:
Comment: We really appreciate your precious time. As you nicely point out, we will carefully include the additional experiments in the paper and improve the expression. Thanks for your insightful suggestion! | Summary: This paper designs a novel Domain-Aware Detection Head with Prompt Tuning (DA-Pro) framework for domain adaptive object detection. The motivation is learning the discriminative detector for each domain instead of reducing the domain bias as in the traditional DAOD methods. Specifically, the authors leverage the vision-language model (VLMs) to build domain-aware detection head. The domain adaptive prompt consists of the domain-invariant tokens, domain-specific tokens, and the domain-related textual description along with the class label. The experimental results seem to be effective on serval DAOD benchmarks.
Strengths: 1. This paper addresses an important research problem and the prompt tuning in vision is a popular direction.
2. This paper is well-written and the proposed method is easy to comprehend.
3. The experimental results appear to be effective compared to previous approaches.
4. The motivation for designing a domain-aware detection head for domain adaptive object detection is reasonable and the technical implementation is easy to follow.
Weaknesses: 1. One major concern of this paper is the difference between the proposed prompts and those in CoOp and DetPro, which use similar learnable prompts for the text prompts. In line 145, this paper claims “both the proposed prompts in RegionCLIP [39] and DetPro [4] cannot model the domain-specific knowledge”. In my opinion, RegionCLIP just fills concepts into prompts, so it does not model domain-specific knowledge. However, DetPro learns the prompt in a certain domain and could indeed learn domain-specific knowledge.
2. The experimental results are not adequate. There are two benchmarks (KITTI to Cityscapes and SIM10K to Cityscapes) that are only evaluated on the ‘car’ category and do not adequately show the effectiveness of the proposed method.
3. Why the bounding box head is shared for both domains? The proposed domain-aware detection head may be further improved for the bounding box head.
4. RegionCLIP and DetPro have the ability to handle open-vocabulary object detection; this work degrades them to closed-set object detection. Why not directly study domain adaptive open-vocabulary object detection that handles both domain shift and knowledge shift (recognizing new concepts), such as [A1]?
[A1] Rethinking Open-World Object Detection in Autonomous Driving Scenarios.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. In line 260, the baseline is RegionCLIP and DetPro with a domain classifier, so are the baseline results reported in the Tables based on RegionCLIP, DetPro, or both? It is recommended to clarify the setting of the baseline model.
2. What is the initialization for the detection model (e.g., the visual encoder and Bbox Head in Figure 2)? Which datasets are used to pre-trained the detection model? How about the backbone only being pre-trained in ImageNet as previous DAOD methods (e.g., DA-Faster RCNN, SIGMA, AT, etc.)?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: RegionCLIP and DetPro have the ability to handle open-vocabulary object detection; this work degrades them to closed-set object detection and limits their real-world application.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Comment:
We sincerely thank you for your comprehensive comments and constructive advice. We are pleased that our work is regarded as reasonable and as addressing a crucial problem. We address your concerns point by point.
**Q1: One major concern of this paper is the differences between the proposed prompts and those in COOP and DetPro. They utilize similar learnable prompts ... domain-specific knowledge.**
A1: Thanks for your valuable concern. In the context of DAOD tasks, the shared knowledge between source and target domains is denoted as domain-invariant knowledge, and knowledge unique to a specific domain is denoted as domain-specific knowledge.
The motivation of CoOp and DetPro is to learn a highly generalizable and discriminative prompt (shown in Eq. 2) on the training domain (a single domain). Ignoring the cross-domain difference, their prompts can only capture domain-specific knowledge of the training set. Due to domain bias, they have limited performance on the target domain.
To enable the prompt to learn cross-domain information, we introduce a novel domain-adaptive prompt, consisting of domain-invariant tokens shared across domains and domain-specific tokens unique to each domain. With the domain-invariant and domain-specific tokens capturing domain-shared and domain-specific knowledge respectively, our DA-Pro gains better performance on the unlabelled target domain. Evaluation on the C2F task shows that the CoOp/DetPro prompts achieve an mAP of 53.0, while our domain-adaptive prompt achieves 55.9, demonstrating better cross-domain performance.
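A minimal sketch of how such a domain-adaptive prompt can be assembled (hypothetical shapes and names; in practice the tokens are learned embeddings fed to the text encoder, not constants):

```python
import numpy as np

def build_prompt(invariant, specific, class_token):
    """Concatenate shared and per-domain token embeddings with a class token.

    invariant:   (n_inv, dim) tokens shared by both domains
    specific:    (n_spec, dim) tokens unique to one domain
    class_token: (1, dim) embedding of the class name
    """
    return np.concatenate([invariant, specific, class_token], axis=0)

dim = 8
invariant = np.zeros((4, dim))            # learned, shared across domains
source_specific = np.ones((2, dim))       # learned, source-domain only
target_specific = np.full((2, dim), 2.0)  # learned, target-domain only
cls = np.zeros((1, dim))

src_prompt = build_prompt(invariant, source_specific, cls)
tgt_prompt = build_prompt(invariant, target_specific, cls)
print(src_prompt.shape, tgt_prompt.shape)  # (7, 8) (7, 8)
```

The two prompts share the invariant block exactly and differ only in the domain-specific block, mirroring the split between domain-shared and domain-specific knowledge described above.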
**Q2: The experimental results are not adequate. K2C and S2C are only evaluated on the ‘car’ category and do not adequately show the effectiveness of the proposed method.**
A2: Thanks for raising this concern. K2C and S2C are two mainstream benchmarks in the DAOD field. Previous methods are evaluated on them with the single class 'car'; we test only on 'car' for a fair comparison with other methods.
To further evaluate the effectiveness of DA-Pro, we added three extra benchmarks: Pascal to Clipart, Watercolor, and Comic. These enable evaluation under more challenging domain shifts and in multi-class settings. Our DA-Pro surpasses the SOTA method (SIGMA++ with ResNet-101) using a weaker backbone (ResNet-50) on all three additional benchmarks, showing the effectiveness of DA-Pro.
||Pascal to Watercolor|Pascal to Clipart|Pascal to Comic|
|---|---|---|---|
|DBGL*|53.8|41.6|29.7|
|Baseline|54.8|43.4|40.6|
|FGRR*|55.7|43.3|32.7|
|SIGMA++*|57.1|46.7|37.1|
|DA-Pro(ResNet-50)|**58.1**|**46.9**|**44.6**|
\* denotes a ResNet-101 backbone
**Q3: Why the bounding box head is shared for both domains? The ... head.**
A3: Thanks for the constructive suggestion. In DAOD, domain bias primarily manifests as semantic variations. Box regression is semantic-agnostic, so a bbox head trained on the source domain is robust to unknown target domains. Classification, however, is semantic-relevant and is more affected by domain bias. Consequently, when a detector trained on the source domain is applied to the target domain, errors primarily stem from misclassification. To explore this further, we randomly selected 100 images from the Cityscapes dataset and calculated the recall of the ground-truth boxes. The results indicate a successful localization rate of 90.7% (IoU > 0.5), whereas the correct classification rate is only 70.8%: most ground-truth objects can be localized by the bbox head, while many are misclassified. As a result, we opt to share the bbox head and tune only the classifier in DA-Pro. Our future research will explore more efficient strategies for tuning the bbox head, such as class-aware and domain-aware bbox heads.
**Q4: The ClipRegion and DetPro have the ability to handle open-vocabulary object detection, this work degrades them to close-set object detection. Why ... such as [A1]?**
A4: Thanks for the valuable suggestion. The DAOD task setting is closed-set object detection. To maintain consistency with other methods, we design DA-Pro to work within the closed-set detection setting. However, DA-Pro can also be modified into an open-vocabulary detection approach with minimal additional overhead. For source- and target-domain data, DA-Pro leverages them to tune the learnable prompt, enhancing the detection capability for classes present in both domains. For newly introduced classes, hand-crafted prompts can still be employed for detection, as with the dictionary mentioned in [A1]. This work focuses on DAOD; our future work could explore open-vocabulary domain adaptation, delving into the adaptation of new classes.
**Q5: In line 260, the baseline is ..., or both? It is recommended to clarify the setting of the baseline model.**
A5: Thanks for this point. The baseline adapts RegionClip, where the prompt is the hand-crafted "A photo of [class][domain]," as shown in the first row of Table 2. We will correct the typo on line 260 and update the baseline settings in the manuscript.
**Q6: What is the initialization for the detection model? Which datasets are used to pre-trained the detection model? How about the backbone only being pre-trained in ImageNet as previous DAOD methods?**
A6: Thanks for your question. In RegionCLIP, the visual and text encoders are initialized from CLIP and finetuned on CC3M (3M image-caption pairs without human annotations). Our work initializes the whole detection model with the weights of RegionCLIP.
After initialization, we pre-train the detection model on the annotated source and the unlabelled target domain with a domain classifier for each benchmark.
ImageNet-pretrained weights lack alignment with textual information. Previous methods address DAOD using visual alignment without considering interactions with text; therefore, they can initialize with ImageNet-pretrained weights. However, image embeddings must align with text embeddings in the CLIP-based framework. For this reason, ImageNet-pretrained weights are unsuitable for our work.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer, We have tried our best to address your concerns in previous responses. If you have any other questions or suggestions, we'd be more than happy to discuss them. | Rebuttal 1:
Rebuttal: **Comment:**
We thank all the reviewers for their insightful and valuable comments! Overall, we are encouraged that they find that:
1. The idea of learning domain-aware detection head is **reasonable** (Reviewer eDFv, Reviewer p2gs, Reviewer A2TR), **evident** and **interesting** (Reviewer 85mu).
2. This paper **addresses an important research problem** and prompt tuning in vision is **a popular direction** (Reviewer eDFv); using VLM seems **a promising direction** (Reviewer x91n).
3. The proposed method obtains **high performance** and **significant improvements** and is **easy to follow** (Reviewer eDFv, Reviewer x91n, Reviewer 85mu, Reviewer P2GS).
We have revised the manuscript according to the reviewers' comments. The main changes we made include:
1. We add experiments on three additional mainstream benchmarks.
2. We add experiments on prompt design, exploring the effectiveness of pseudo-labels and hand-crafted domain-related textual tokens.
3. We add more details about the setting of the baseline model.
4. We revise the details in Figure 2 and add the architecture of the baseline.
Next, we address each reviewer's detailed concerns point by point. We hope we have addressed all of your concerns. Thank you! | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper introduces VLMs to domain adaptive object detection. To be specific, this paper uses a highly generalized VLM as the detection backbone and adapts the detection head instead. To learn domain-invariant and domain-specific knowledge, this paper extends the prompt to domain-invariant and domain-specific ones, which are optimized with corresponding losses. Besides, to adapt the domain-adaptive prompt for unsupervised object detection, this paper uses CLIP to get pseudo-labels and a Prompt Ensemble to stabilise training. Experiments on DAOD benchmarks show better performance.
Strengths: - This paper shows that using a highly generalized VLM is a promising direction with relatively high performance
Weaknesses: - The comparison method and evaluation could be improved. See questions for details.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - The Source and Target classifiers in Figure 2 denote a similarity calculation, and the drawn entity is misleading
- What is the motivation of the prompt dl_d? It seems to play the same role as the domain-specific prompts.
- An important comparison method [1] is missing, which, as far as I know, is the SoTA of DAOD and pushes the K2C performance to a very high level.
- The CLIP-initialized visual encoder is an image-level extractor; I am afraid it may not be suitable for detection.
- I am curious whether the learned prompts are independently semantic. For example, K2C and S2C both learn domain-specific prompts of domain C; what performance will be obtained if the corresponding prompts are exchanged?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Comment:
We sincerely thank you for the valuable comments. We are encouraged to see that our work is recognized as a promising direction. We will explain your concerns point by point.
**Q1: The Source and Target classifiers in Figure 2 denote a similarity calculation, and the drawn entity is misleading.**
A1: Thanks for your nice suggestion. The classifier in Figure 2 does indeed correspond to the similarity calculation from image embeddings to text embeddings. We intend to revise this entity in the manuscript, changing it to a bar graph to depict the class similarity.
**Q2: What is the motivation of the prompt dl_d? It seems to play the same role as the domain-specific prompts.**
A2: Thanks for raising an important point.
Our motivation is to leverage textual descriptions of domains and introduce hand-crafted prior information to facilitate more efficient learning of domain-specific tokens. In order to learn highly discriminative prompts, CoOp adopts a prompt design of [learnable prompt, [class]], tuning the learnable prompt based on the class text description provided by humans. Inspired by this, we introduce the hand-crafted "dl_d" token to incorporate domain-related textual descriptions. The hand-crafted token offers a solid, discriminative initialization. On this foundation, domain-specific tokens further learn the bias between the two domains, further enhancing the prompt's discrimination. The combination of hand-crafted and learnable tokens yields superior results. We have conducted additional experiments on three different scenarios. Without the dl_d token, using the prompt t^d_i=[v^c_1][v^c_2]…[v^c_M][v^s_1][v^s_2]…[v^s_N][c_i] suffers a 0.7~1.7% mAP drop. Experimental results demonstrate that dl_d assists in the convergence of domain-specific tokens and enhances the discrimination of the prompt.
| Prompt Design\Benchmark | C→F | K→C | S→C |
| --- | --- | --- | --- |
| [v^c_1][v^c_2]…[v^c_M][v^s_1][v^s_2]…[v^s_N][c_i] | 54.9 | 60.7 | 61.2 |
| [v^c_1][v^c_2]…[v^c_M][v^s_1][v^s_2]…[v^s_N][c_i][d_i] | **55.9** | **61.4** | **62.9** |
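The two prompt layouts in the table amount to concatenating token embeddings; a minimal sketch of that construction (names, dimensions, and the `embed` stand-in for a real tokenizer/embedding lookup are all hypothetical):

```python
import random

random.seed(0)
DIM, M, N = 8, 4, 4  # embedding width, #domain-invariant, #domain-specific tokens

def rand_vec():
    return [random.gauss(0.0, 1.0) for _ in range(DIM)]

v_common = [rand_vec() for _ in range(M)]                # [v^c_1..v^c_M], shared
v_specific = {"source": [rand_vec() for _ in range(N)], # [v^s_1..v^s_N],
              "target": [rand_vec() for _ in range(N)]} # one set per domain

def embed(word):
    # stand-in for a real tokenizer + embedding lookup (hypothetical)
    return rand_vec()

def build_prompt(cls_name, domain, with_domain_token=True):
    """[v^c_1..v^c_M][v^s_1..v^s_N][c_i]([d_i]) as a list of token embeddings."""
    tokens = v_common + v_specific[domain] + [embed(cls_name)]
    if with_domain_token:
        tokens = tokens + [embed(domain)]  # the hand-crafted dl_d / [d_i] token
    return tokens

prompt = build_prompt("car", "target")
```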
**Q3: An important comparison method [1] is missing, which, as far as I know, is the SoTA of DAOD and pushes the K2C performance to a very high level.**
A3: Thanks for the suggestion. It appears that the reference for method [1] was not specified in the review. Nevertheless, we supplement a series of SOTA methods employing the same detection framework (Faster R-CNN) as our approach. Among these, the method PT (ICML 2022) indeed elevates the K2C performance significantly. However, in comparison with PT, our approach achieves even better performance.
|Method\Benchmark | C→F | K→C | S→C |
| --- | --- | --- | --- |
| MGA | 44.3 | 45.2 | 49.8 |
| TDD | 43.1 | 47.4 | 53.4 |
| PT | 47.1 | 60.2 | 55.1 |
| DA-Pro | **55.9** | **61.4** | **62.9** |
*MGA: Zhou W, Du D, Zhang L, et al. Multi-granularity alignment domain adaptation for object detection[C]. In CVPR, 2022.*
*TDD: He M, Wang Y, Wu J, et al. Cross domain object detection by target-perceived dual branch distillation[C]. In CVPR, 2022.*
*PT: Chen M, Chen W, Yang S, et al. Learning Domain Adaptive Object Detection with Probabilistic Teacher[C]. In ICML, 2022.*
**Q4: The CLIP-initialized visual encoder is an image-level extractor; I am afraid it may not be suitable for detection.**
A4: Thanks for your concern. In our work, we apply RegionCLIP as the visual encoder rather than CLIP. Because CLIP focuses on learning from image-level image-text pairs, it indeed lacks the capability to localize regions within images. Therefore, directly employing CLIP as the visual encoder in a detection framework would lead to unacceptable performance degradation. To address this challenge, RegionCLIP establishes region-text pairs and achieves alignment in the local feature space, effectively integrating CLIP into the object detection framework.
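The similarity-based classification underlying this framework (region embedding matched against per-class prompt embeddings) can be sketched as follows; the toy embeddings and cosine-similarity rule are illustrative, not RegionCLIP's actual implementation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda w: math.sqrt(sum(c * c for c in w))
    return dot / (norm(u) * norm(v))

def classify_region(region_emb, text_embs):
    """Assign the class whose prompt embedding is most similar to the region."""
    sims = {cls: cosine(region_emb, e) for cls, e in text_embs.items()}
    return max(sims, key=sims.get), sims

# Toy example: the region embedding is aligned with the "car" prompt.
text_embs = {"car": [1.0, 0.0, 0.1], "person": [0.0, 1.0, 0.0]}
pred, sims = classify_region([0.9, 0.1, 0.1], text_embs)
```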
**Q5: I am curious whether the learned prompts are independently semantic. For example, K2C and S2C both learn domain-specific prompts of domain C; what performance will be obtained if the corresponding prompts are exchanged?**
A5: Thank you for raising this intriguing question.
Semantic information is jointly represented by domain-invariant and domain-specific tokens for a given domain. The domain-specific token is learned by capturing the differences between domains based on the domain-invariant token. And the domain-invariant token captures the domain-shared knowledge, which is determined simultaneously by both the source and target domains. As a result, for the same target domain, if the source domain varies, the learned domain-invariant tokens also differ, leading to discrepancies in learned domain-specific tokens. We exchanged the learned domain-specific tokens between K2C and S2C and observed a severe performance drop in inference results. In fact, K2C and S2C possess distinct domain-shared and domain-specific knowledge. In this scenario, the learned domain-invariant and domain-specific tokens are distinct in K2C and S2C. Hence, directly swapping learned tokens leads to a significant performance decrease.
| Benchmark\Inference prompt | Default(DA-Pro) | exchange domain-specific token |
| --- | --- | --- |
| K→C | 61.4 | 17.2 |
| S→C | 58.7 | 12.9 |
---
Rebuttal Comment 1.1:
Comment: Dear reviewer, we have tried to address your concerns in our earlier responses. If you have any additional questions or suggestions, we are very happy to discuss with you. | null | null | null | null | null | null |
Two Sides of One Coin: the Limits of Untuned SGD and the Power of Adaptive Methods | Accept (poster) | Summary: This paper shows mainly two things:
(a) SGD suffers from an exponential dependence on the smoothness constant when the initial stepsize is not tuned to be smaller than its inverse. This exponential dependence is unavoidable.
(b) Methods with gradient normalization and running gradient sum normalization, such as Normalized SGD, AMSGrad, and AdaGrad, suffer no such exponential dependence on the smoothness constant.
The paper also presents a novel analysis of AMSGrad that removes the bounded gradients assumption.
Strengths: 1. The result on AMSGrad is new and a welcome addition to the literature.
2. The paper's emphasis on the benefits of normalization even in the deterministic setting is good, since this is a point quite overlooked in the community.
Weaknesses: 1. The result on the exponential dependence on the smoothness constant is a known consequence of another result in the literature. Under the assumption on the stochastic gradients $\mathbb{E} \|g(x)\|^2 \leq 2 A (f(x)-f_*) + B \|\nabla f(x)\|^2 + C$. Note that bounded stochastic gradient variance corresponds to $A=L$, $B=0$ and $C=\sigma$ (since bounded variance implies $\mathbb{E}\|g(x)\|^2 \leq \|\nabla f(x)\|^2 + C \leq 2 L (f(x)-f_*) + C$). The result of Theorem 2 in [1] gives for this choice ($A=L$, $B=0$, $C=\sigma$) a rate of $\frac{(1+\gamma^2 L^2)^K}{\gamma K} \delta_0 + L \gamma C$ where $\delta_0$ is the initial suboptimality. The lower bound is also known, see [2, Theorem 5].
[1] Khaled and Richtárik. Better Theory for SGD in the Nonconvex World. arXiv:2002.03329
[2] Vaswani, Benjamin Dubois-Taine, and Babanezhad. Towards Noise-adaptive, Problem-adaptive (Accelerated) Stochastic Gradient Descent. arXiv:2110.11442.
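For completeness, the chain behind the substitution above combines the bounded-variance decomposition with the standard smoothness inequality $\|\nabla f(x)\|^2 \leq 2L(f(x)-f_*)$:

```latex
\mathbb{E}\,\|g(x)\|^2
  = \|\nabla f(x)\|^2 + \mathbb{E}\,\|g(x) - \nabla f(x)\|^2
  \leq \|\nabla f(x)\|^2 + C
  \leq 2L\bigl(f(x) - f_*\bigr) + C,
```

which matches the assumption with $A = L$ and $B = 0$.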
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Please address the difference between your results and the results I've mentioned in the weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the comments.
> **The result on the exponential dependence on the smoothness constant is a known consequence of another result in the literature...**
We thank the reviewer for pointing out the two relevant references and we will add more discussions about them in the revision. However, we are afraid that the reviewer might have overlooked some fundamental differences between our results and theirs.
**In reference [1]**, Theorem 2 states that: with a constant stepsize $\eta$, an upper bound of $\mathcal{O}\left(\eta + \frac{(1+\ell^2\eta^2)^T}{\eta T} \right)$ holds after $T$ iterations. There are key differences from our Theorem 1:
1. They consider constant stepsize $\eta$, while we consider polynomially decreasing stepsize $\eta/\sqrt{t}$.
2. The exponential terms are very different. Their result contains the term $(1+\ell^2\eta^2)^T$ with the exponent $T$, while ours is $(4e)^{2\eta^2\ell^2}$ with exponent $2\eta^2\ell^2$.
3. When $\eta$ is relatively large, their result diverges, whereas ours consistently converges.
Note that we have also discussed in Remark 1, Line 183-185: "We do not consider constant stepsize, i.e., $\alpha = 0$, because it is well known to diverge even in the deterministic setting if the stepsize is agnostic to the problem parameter [1, 51].", highlighting the divergent behavior of the constant stepsize. In contrast, the diminishing stepsizes we consider are more interesting, and always lead to convergence despite the presence of the exponential constant.
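The contrast between the two behaviors can be seen in a toy deterministic run (purely illustrative, not the paper's construction): gradient descent on $f(x) = \ell x^2/2$ with stepsize $\eta/\sqrt{t}$ and $\eta \gg 1/\ell$ first blows up and only recovers once $\eta/\sqrt{t}$ drops below $2/\ell$, while a normalized variant stays bounded throughout.

```python
import math

def run(ell=10.0, eta=1.0, T=200, x0=1.0, normalized=False):
    """Deterministic GD on f(x) = ell * x^2 / 2 with stepsize eta / sqrt(t)."""
    x, peak = x0, abs(x0)
    for t in range(1, T + 1):
        g = ell * x                   # exact gradient of the quadratic
        if normalized:
            g = g / (abs(g) + 1e-12)  # NSGD-style unit-norm direction
        x -= (eta / math.sqrt(t)) * g
        peak = max(peak, abs(x))
    return abs(x), peak

final_gd, peak_gd = run()                   # untuned GD: transient blow-up
final_ngd, peak_ngd = run(normalized=True)  # normalized GD: stays bounded
```

On this 1-D quadratic, the plain run overshoots to a peak many orders of magnitude above $|x_0|$ before the shrinking stepsize brings it back (the quick recovery here is an artifact of the toy quadratic; in the worst case, the catch-up cost is exponential), whereas the normalized run never exceeds $|x_0|$.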
**In reference [2]**, Theorem 5 states that there exists a quadratic function $f(x)$ such that, fixing the total number of iterations $T$, running gradient descent with stepsize $\eta_t = \frac{\nu}{\ell}\left(\frac{\beta}{T}\right)^{t/T}$, with $\nu$ and $\beta$ being some constants, will satisfy $\|x_{\tilde t + 1} - x^*\| \geq 2^{\tilde{t}}\|x_0 - x^*\|$ at an iteration $\tilde{t} = \Theta(T/\ln(T))$. The primary distinctions from our lower bound are:
1. The settings are different. They consider strongly convex setting with exponentially decreasing stepsizes that requires a prefixed $T$, but we consider nonconvex setting with polynomially decreasing stepsizes.
2. Their result features the exponential term $2^{\tilde{t}}$ at a specific iteration $\tilde{t}$, but their upper bound (as per Theorem 4 in [2]) actually does not include any exponential term at the last iteration $T$. This implies that after $\tilde{t}$ iterations, gradient descent can converge very quickly in their setting. In contrast, both our upper and lower bounds include an exponential term multiplied by $T^{-1/4}$ at the last iteration.
We hope our clarification on these differences addresses the reviewer's concern and we are happy to discuss more if the reviewer has further questions.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
1. It is trivial to just instead use the constant stepsize $\frac{\eta}{\sqrt{T}}$ when the horizon $T$ is known, and obtain a very similar exponent to the one you have. Observe that $1+x \leq e^x$ and therefore $(1+ \ell^2 \frac{\eta^2}{T})^{T} \leq e^{\ell^2 \eta^2}$. I don't think the analysis with decreasing versus constant stepsizes with a known time horizon is sufficiently different to qualify as its own paper.
2. A strongly convex function is in a smaller function class than nonconvex functions, i.e. a lower bound that constructs a strongly convex function might be too tight, but is never too loose. Therefore, if it shows divergence in case the stepsize is misspecified, this holds for nonconvex objectives. And the main message of their result as applied to your setting would be the limit of stepsize misspecification by adaptivity, not the exact convergence rate.
---
Reply to Comment 1.1.1:
Comment: Thanks for actively engaging in the discussion.
> **It is trivial to just instead use the constant stepsize $\frac{\eta}{\sqrt{T}}$ when the horizon is known, and obtain a very similar exponent to the one you have. Observe that $1+x \leq e^x$ and therefore $(1+\ell^2 \frac{\eta^2}{T})^T \leq e^{\ell^2 \eta^2}$. I don't think the analysis with decreasing versus constant stepsizes with known time horizon is different to qualify as its own paper.**
Regarding the upper bound in [1], we agree that the exponential term looks similar provided that we know $T$ and can pick the constant stepsize $\eta/\sqrt{T}$. At the same time, we humbly believe that extending the analysis in [1] to the case of diminishing stepsize $\eta_t = \eta/\sqrt{t}$ is not completely straightforward. However, we would like to highlight that [1] only provides the upper bound, while in order to make a strict separation between untuned SGD and adaptive methods, it is necessary to establish the lower bound for untuned SGD. This is because the upper bound might be loose and alone **does not imply the exponential dependence on $\ell$ multiplied with $\epsilon^{-4}$ is tight**. This lower bound construction is the key argument of our work to showcase the fundamental difference between untuned SGD and adaptive methods. To our knowledge, our **lower bound construction is novel and distinct** from those in the literature and applies to the fundamental non-convex smooth setting (with classical polynomially diminishing stepsizes). Importantly, our upper and lower bounds match, providing a comprehensive analysis of nonconvex untuned SGD. We believe that this contribution gives conclusive evidence about the limits of parameter agnostic methods under this setting, a topic that was not sufficiently discussed in the optimization literature.
> **A strongly convex function is in a smaller function class than nonconvex functions, i.e. a lower bound that constructs a strongly convex function might be too tight, but is never too loose. Therefore, if it shows divergence in case the stepsize is misspecified, this holds for nonconvex objectives. And the main message of their result as applied to your setting would be the limit of stepsize misspecification by adaptivity, not the exact convergence rate.**
It is worth noting that the upper bound in [2] (Theorem 4 in the paper) does not include the exponential term in the last iterate $T$, which implies there is also no exponential term in the lower bound for the last iterate. Their lower bound's exponential term only emerges before an iterate $\tilde{t} < T$. If we adopt their lower bound case $f(x) = \frac{\ell}{2}(x-a)^2$ with our stepsize $\eta/\sqrt{t}$, it will lead to a lower bound of $\Omega\left((2\eta \ell)^{-2} \log^2\frac{\exp(\ell^2 \eta^2)}{\epsilon}\right)$, in which the exponential term will be **forgotten exponentially fast** and there exists a huge gap with the upper bound of $\mathcal{O} (\exp(\ell^2)\epsilon^{-4})$. The multiplication between the exponential term and $\epsilon^{-4}$ is especially important because $\ell$ is usually large and the target accuracy is small. To achieve a matching term in the lower bound is challenging and we carefully construct our lower bound with a **nonconvex** example. | Summary: The authors investigate the behavior of untuned SGD in the smooth nonconvex setting and show a new result on the convergence rate of SGD w.r.t. the gradient norm, where there is an exponential dependence on the smoothness constant. They further argue that the exponential dependence is unavoidable through a constructed class of 1-dimensional nonconvex functions. The paper then examines NSGD, AMSGrad and AdaGrad and shows that the exponential dependence can be avoided through adaptiveness, albeit without any information about the problem parameters.
Strengths: This paper offers an interesting theoretical perspective on the explosive gradient problem and the nonconvergence properties of SGD. The authors complement their theoretical results and ideas with numerical illustrations, and the paper is easy to follow. Their results seem well-justified, but I did not check their proofs in the appendix. I believe this work is of interest to the NeurIPS community.
Weaknesses: 1) The current numerical experiments seem preliminary and are only done on one dataset (MNIST) with a small network. I would like to see a more comprehensive investigation into larger practical networks, perhaps from [6, 24, 54].
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) In figure 1, what is the size of each layer in the 3-layer neural network? What effect does over-parameterization have on the untuned SGD behavior?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, the authors have addressed their limitations through the checklist.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the recognition of our work.
> **The current state of numerical experiments seem preliminarily and is only done on one dataset MNIST on a small-network. I would like to see a more comprehensive investigation into larger practical networks, perhaps from [6, 24, 54].**
To complement our experiments on MNIST, we used the CIFAR-10 dataset (Krizhevsky et al., 2009), as suggested in [24], to train a 50-layer ResNet (He et al., 2016), which is more common and larger-scale than the models in [6, 54]. In these experiments, we observed a similar exponential explosion phenomenon with SGD when using large stepsizes. In contrast, adaptive methods demonstrated robustness to changes in stepsizes. The detailed experimental results can be found in the PDF of our general response to all reviewers.
> **In figure 1, what is the size of each layer in the 3-layer neural network? What effect does over-parameterization have on the untuned SGD behavior?**
In Figure 1 of our paper, the 3-layer neural network is structured with layer sizes as follows: 784 (input size) $\rightarrow$ 512 $\rightarrow$ 256 $\rightarrow$ 10.
In our new experiment, when we use a 50-layer ResNet -- an over-parameterized neural network with more than 23 million parameters -- we observed behaviors consistent with our findings from smaller networks.
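For reference, the stated layer widths can be checked with a plain forward-pass sketch (random weights, and ReLU activations are an assumption; the paper's actual activations and initialization may differ):

```python
import random

random.seed(0)
SIZES = [784, 512, 256, 10]  # input -> hidden -> hidden -> output widths

def make_layer(n_in, n_out):
    return [[random.gauss(0.0, 0.05) for _ in range(n_in)] for _ in range(n_out)]

weights = [make_layer(a, b) for a, b in zip(SIZES, SIZES[1:])]

def forward(x):
    """Plain fully connected forward pass with ReLU on the hidden layers."""
    for i, W in enumerate(weights):
        x = [sum(w * v for w, v in zip(row, x)) for row in W]
        if i < len(weights) - 1:
            x = [max(0.0, v) for v in x]  # ReLU (assumed) on hidden layers
    return x

logits = forward([0.5] * 784)
```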
**References**
- Krizhevsky, Alex, et al. "Learning multiple layers of features from tiny images." 2009.
- He, Kaiming, et al. "Deep residual learning for image recognition." CVPR. 2016.
---
Rebuttal Comment 1.1:
Title: Response to Authors' Rebuttal
Comment: I acknowledge the response by the authors and have considered other reviewers' comments. I intend to keep my original evaluation.
Strengths: 1) The authors show that untuned SGD converges to an $\epsilon$-stationary point in $O(e^{\eta^2 l^2}\epsilon^{-4})$ iterations. Although this algorithm does have the optimal dependence on $\epsilon$, it has a disastrous exponential term $\eta^2 l^2$ w.r.t. the smoothness parameter. Hence, the assumption of bounded gradients or choosing $\eta$ to depend on $l$ is problematic, because we may not have prior knowledge of $l$. They show that even for a smooth 1D-function, the assumption of bounded gradients is problematic. This is indeed true and the experiment in Figure-1 supports this claim.
2) Adaptive gradient methods adjust their step-size based on observed gradients and hence can decrease the step-size when encountering large gradients, preventing blow-up. This work does not make the bounded-gradient assumption for these methods and shows that the convergence rate does not exponentially depend on the smoothness parameter, making them more stable than untuned SGD.
The core strength of the paper lies in removing the assumption of bounded gradients, revealing the true dependency of these algorithms on the smoothness parameter $l$, which highlights the advantage of adaptive methods over untuned SGD.
Weaknesses: Disclaimer: I am unfamiliar with recent developments in this specific direction. But I still really enjoyed reading this work and I feel it improves over existing convergence results. I have a few questions that I encountered, but I don't list them as major weaknesses:
1) The constructed function $f(x)$ in Figure-2 does not look fully specified. Is there an extension of the function outside Segment-1? If so, the authors should mention that. Although the equations for Segment-4 and Segment-1 are provided, Segments-2 and 3 are still missing.
2) From Theorem-1, the main reason untuned SGD may blow up is that some initial $\eta \geq \frac{1}{l}$. But in practice SGD also converges with a small constant learning rate, which avoids this regime and hence does not blow up. However, it is mostly observed that an initially large learning rate performs better in terms of generalization [1]. If practitioners use a small enough (but constant) learning rate $\eta \leq \frac{1}{l}$, then the whole issue of gradient blow-up can be avoided. Even in such practical cases, no prior knowledge of $l$ is required to select the step-size $\eta$. So is it true that achieving generalization is the bottleneck in choosing the step-size as large as possible?
[1] Li, Yuanzhi, Colin Wei, and Tengyu Ma. "Towards explaining the regularization effect of initial large learning rate in training neural networks." Advances in Neural Information Processing Systems 32 (2019).
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: See the points above for the questions.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: I believe the whole problem of gradient blow-up can be avoided by just using a small enough practical $\eta$ according to Theorem-1. But that would hurt generalization in overparameterized networks. Hence, I believe the comparison of constant-step SGD and other adaptive gradient methods should also be done in terms of generalization and not only convergence. This would give us the full picture.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thanks for the recognition of our work.
> **The constructed function $f(x)$ in Figure-2 does not look fully specified. Is there an extension of the function outside Segment-1? If so, the authors should mention that. Although the equations for Segment-4 and Segment-1 are provided, Segments-2 and 3 are still missing.**
We thank the reviewer for mentioning this, and we will put the formal definitions for Segments-2 and 3 in the main body of the paper. In the current version, their definitions can be found in the proof for Theorem 2 (in Appendix B.2 of the supplementary material). They are constructed to connect Segment-1 to Segment-4 and guarantee the overall function is continuous and $\ell$-smooth.
> **From Theorem-1, the main reason untuned SGD may blow up is that some initial $\eta \geq \frac{1}{l}$. But in practice SGD also converges with a small constant learning rate, which avoids this regime and hence does not blow up. However, it is mostly observed that an initially large learning rate performs better in terms of generalization [1]. If practitioners use a small enough (but constant) learning rate $\eta \leq \frac{1}{l}$, then the whole issue of gradient blow-up can be avoided. Even in such practical cases, no prior knowledge of $l$ is required to select the step-size $\eta$. So is it true that achieving generalization is the bottleneck in choosing the step-size $\eta$ as large as possible?**
If we interpret the reviewer's question correctly, the last sentence is asking: "is it true that achieving generalization is the bottleneck in choosing the step-size $\eta$ as _small_ as possible?"
From the optimization perspective, a reasonable stepsize ($\eta \leq 1/\ell$) avoids blow-up, but an excessively small stepsize results in slower convergence. Theorem 1 also indicates this: when $\eta \leq 1/\ell$, the bound includes an $\eta^{-1}$ factor, signaling a slowdown. The slow convergence is intuitively expected in practical scenarios when an extremely small stepsize is used. Hence, achieving generalization is not the only consideration; a proper stepsize, e.g. $\Theta(1/\ell)$, is required to achieve fast optimization.
When taking generalization into account, the situation becomes more complicated and exceeds the scope of this paper. However, we agree that the relationship between stepsize and generalization is an intriguing research topic that has been actively and extensively examined (Jastrzębski et al., 2017; He et al., 2019; Li et al., 2019; Nakkiran, 2020).
It would be an interesting future direction to balance optimization and generalization bounds in different stepsize regimes.
**References**
- Jastrzębski, Stanisław, et al. "Three factors influencing minima in sgd." arXiv preprint arXiv:1711.04623. 2017.
- He, Fengxiang, et al. "Control batch size and learning rate to generalize well: Theoretical and empirical evidence." NeurIPS. 2019.
- Li, Yuanzhi, et al. "Towards explaining the regularization effect of initial large learning rate in training neural networks." NeurIPS. 2019.
- Nakkiran, Preetum. "Learning rate annealing can provably help generalization, even for convex problems." arXiv preprint arXiv:2005.07360. 2020.
---
Rebuttal Comment 1.1:
Comment: I acknowledge that I read the author's rebuttal and based on other reviewer's comments, I intend to keep my current score. | Summary: This work analyses the rate of SGD and other adaptive SGD methods in reducing $\\mathbb{E}\\|\\nabla f(x_t)\\|$, where $f$ is non-convex $L$-smooth, and shows that SGD has an exponential dependence on $L$ when the stepsize is not properly tuned, while other adaptive methods do not incur this exponential dependence.
Strengths: The analysis presented in this work is fresh and interesting. The authors make a compelling case that removing the bounded gradient assumption is perhaps essential in revealing a crucial advantage of adaptive SGD methods over standard SGD.
As I explain in the weakness section, I think the qualitative idea behind the described phenomenon is known. However, the quantitative results are, as far as I know, new and, in my opinion, very interesting. I think the qualitative understanding gained by the quantitative analysis is valuable, so I vote for the paper to be accepted.
Weaknesses: At a high level, one can argue that the result was "known" in the following sense.
When an LR schedule of $\eta_k=\eta/\sqrt{k}$ is used, $\eta_k$ will be large for small $k$. It is known that SGD exhibits divergent behavior when the stepsize exceeds $2/L$, so SGD will initially diverge. The worst-case amount of divergence is exponential in $L$. Once $\eta_k$ diminishes to a level below the divergence threshold, the algorithm needs to recover from the initial exponential divergence, hence the rate.
On the other hand, adaptive methods should not exhibit such divergent behavior (at least not to an exponential extent). Therefore, there is no initial exponential divergence, so there is no catch-up to play at the later stage of the algorithm.
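This catch-up intuition is easy to reproduce on a toy problem. The sketch below (my own illustration, with arbitrary constants and deterministic gradients) runs the $\eta_k = \eta/\sqrt{k}$ schedule on $f(x) = \tfrac{L}{2}x^2$, once for plain GD and once for normalized GD:

```python
import math

L, eta, T = 10.0, 0.95, 5000
x_gd = x_ngd = 1.0
peak_gd = peak_ngd = 1.0
for k in range(1, T + 1):
    eta_k = eta / math.sqrt(k)
    x_gd -= eta_k * L * x_gd                 # plain GD step: gradient is L*x
    sign = (x_ngd > 0) - (x_ngd < 0)         # on this 1-D quadratic, grad/|grad| = sign(x)
    x_ngd -= eta_k * sign                    # normalized GD step
    peak_gd = max(peak_gd, abs(x_gd))
    peak_ngd = max(peak_ngd, abs(x_ngd))
print(f"plain GD:      peak {peak_gd:.1e}, final {abs(x_gd):.1e}")
print(f"normalized GD: peak {peak_ngd:.1e}, final {abs(x_ngd):.1e}")
```

Plain GD blows up by several orders of magnitude while $\eta_k > 2/L$ and must then catch up, while the normalized iterate never leaves a bounded neighborhood of the optimum.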
I think it would be worthwhile for the authors to discuss the view that the exponential constant corresponds to the amount of catch-up SGD needs to make.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Is the line of reasoning the authors lay out specific to non-convex SGD? It seems to me that a similar type of exponential dependence could be shown for the smooth-convex setup.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: .
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the recognition of our work.
> **At a high level, one can argue that the result was "known" in the following sense... I think it would be worthwhile for the authors to discuss the view that the exponential constant corresponds to the amount of catch-up SGD needs to make.**
We agree that this is an intuition behind our results. We note that SGD with a constant stepsize may diverge exponentially, but it is less apparent that an exponential term persists even with a _diminishing_ stepsize, while simultaneously achieving an order-optimal convergence.
Furthermore, it is noteworthy that the exponential term is _multiplied_ by the optimal rate in both our upper and lower bounds,
which is nontrivial.
In general, our results reveal that a strict advantage of adaptive methods is removing the exponential factor, not achieving an order-optimal rate parameter-agnostically as commonly perceived -- an insight that might seem intuitive in hindsight but to our knowledge has not been formally proved in the existing literature -- and we believe it is a valuable one that deserves recognition.
> **Is the line of reasoning the authors lay out specific to non-convex SGD? It seems to me that a similar type of exponential dependence could be shown for the smooth-convex setup.**
We appreciate the reviewer's insightful question. Our focus in this paper is on the nonconvex setting because most of the motivating examples are in the nonconvex regime.
After some preliminary study, we believe that the upper bound analysis can be extended to the smooth convex setting. Analogous to the proof of Theorem 1, the analysis can be divided into two stages: the first stage is when the stepsize is still larger than $1/\ell$; the second is when the stepsize has decreased sufficiently. Let $\tau$ be the first iteration at which the stepsize is less than $1/\ell$. In the first stage, by employing techniques similar to those in Theorem 1 and leveraging convexity, we can bound $\|x_{\tau} - x^*\|^2$ by an exponential term.
The term $\|x_\tau - x^*\|$ then serves as the initial distance for the second stage, whose convergence can be analyzed with
the classic analysis in the stochastic convex setting, leading to a complexity of $\mathcal{O}(1/\epsilon^2)$ for finding an $\epsilon$-optimal point (i.e., $f(x) - f^* \leq \epsilon$). Combining the two stages leads to a total complexity of $\epsilon^{-2}$ multiplied by an exponential term.
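Schematically, and only as an illustrative sketch (with $D = \|x_0 - x^*\|$, an absolute constant $c$, and variance terms absorbed into $\sigma^2$), the two stages would combine as

$$
\|x_\tau - x^*\|^2 \;\le\; e^{c\,\eta^2\ell^2}\, D^2,
\qquad
\mathbb{E}\, f(\bar{x}_T) - f^* \;=\; O\!\left(\frac{\|x_\tau - x^*\|^2 + \sigma^2}{\sqrt{T-\tau}}\right),
$$

so that $T = e^{O(\eta^2\ell^2)}\,\epsilon^{-2}$ iterations suffice for an $\epsilon$-optimal point; as in the nonconvex case, the exponential term multiplies the optimal $\epsilon^{-2}$ rate rather than adding to it.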
Our original nonconvex hard instance for the lower bound does not apply here. However, we may construct a simple instance in the convex setting: a two-dimensional additive example such as $\ell x_1^2 + \lambda x_2^2$. This instance results in a lower bound of the order $\exp(\ell^2)\log^2\epsilon^{-1} + \epsilon^{-4}$ for finding an $\epsilon$-stationary point or $\exp(\ell^2)\log\epsilon^{-1} + \epsilon^{-2}$ for finding an $\epsilon$-optimal point. We note that the highest-order term in $\epsilon$ is not multiplied by the exponential term. Therefore, we suspect there is a small gap with this simple hard instance, making it an interesting future direction to explore. | Rebuttal 1:
Rebuttal: We thank the reviewers for their valuable feedback and the overall positive evaluation of our work.
As requested by Reviewers CaFA and vyoV, we conducted additional experiments with deep neural networks to demonstrate the gradient explosion effect on a more practical (large scale) experiment. In particular, we evaluated the performance of untuned SGD and several adaptive variants on the CIFAR-10 (Krizhevsky et al., 2009) dataset using a 50-layer ResNet (He et al., 2016). The observations are consistent: with a larger initial stepsize, SGD tends to first experience an exponential blow-up before eventually converging as the stepsize decreases sufficiently, while adaptive gradient methods are robust to the stepsize change. We kindly refer to the attached PDF below for more details.
If the reviewers have further questions, we will be happy to address them in the discussion phase.
**References**
- Krizhevsky, Alex, et al. "Learning multiple layers of features from tiny images." 2009.
- He, Kaiming, et al. "Deep residual learning for image recognition." CVPR. 2016.
Pdf: /pdf/87d1e89aa79bfb338dc418f903487cf8ff7106e4.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: The article under review presents results showing that untuned SGD may be less well-suited than its normalized versions for solving smooth non-convex problems.
In order to prove this, the authors present several results, from upper to lower bounds, for finding a critical point of smooth non-convex problems without knowledge of the smoothness parameter. They show that while SGD may suffer from the non-adaptivity of the step-size, its normalized version gets rid of this problem.
Strengths: - The paper's description of the problem it wants to tackle is good, and the questions addressed are well introduced.
- While I find it a weakness too (see below), it is remarkable that the authors present both lower bounds and upper bounds on a lot of different settings.
- The figures summarize well the main idea of the negative result on untuned SGD and the principle of the lower bound
- The table helps to navigate in this hairy paper
Weaknesses: *Main comments*
While the problem raised in the introduction, on the difference between SGD and Adam (and its many variants), is an important one where almost nothing is known, I find the answer of this article not really convincing:
- The main phenomenon pinpointed by the authors is the presence of the constant $e^{\eta^2 \ell^2}$ in the bound to find a critical point: this factor is due to the fact that, initially, the step size is too big compared to the local curvature of the function, while after some time, since the step-size is a decreasing function $\eta_t = \eta/\sqrt{t}$, it becomes well-conditioned. While some similar phenomenon may take place for some learning problems, I am not sure that such an analysis is the crux of the problem for the comparison between Adam and SGD. Maybe some experiments on non-toyish problems might help convince the reader (or at least myself): do we really see this $e^{\eta^2 \ell^2}$ popping out and eventually really slowing down the convergence?
- The article claims, for practical purposes, to improve the theory from *bounded stochastic gradients* to *bounded variance* of the stochastic gradient: surely, theoretically it is a nice contribution, but it does not really serve to answer the claimed question. Furthermore, even in this case, I think that the set-up is still not valid for the simplest least-squares case… so it does not seem to me an incredible update.
- Finally, as stated in the strengths paragraph, the article addresses a lot of different setups, with different algorithms, sometimes stochastic, sometimes deterministic, and it is very difficult to understand the true contribution of the article if the authors do not pinpoint it. Sometimes the reader, or at least I, was completely lost as to what was known, both in terms of technique and/or results.
*Minor comments*
- Theorem 1: $\Delta$ is not defined. I am surprised that there is no problem when $\eta$ is too big (no upper bound on $\eta$, and it does not diverge!), but this may be an artifact on the bounded variance stochastic gradient assumption.
- Theorem 2: Good lower bound. I like it together with the illustration, as we understand the phenomenon well. However, why not prove it for SGD? This is only a technical limitation, I guess.
*Final precaution.*
Overall, I have to say that I do not come from the community that analyses the general convergence of SGD for non-convex problems and its many normalized variants. Hence, it is hard for me to see what the real contribution of the authors is.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Already said in the paragraph above
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Already said in the paragraph above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the comments!
> **The main phenomenon ... Maybe some experiments on non-toyish problem ...**
Thank you for suggesting additional experiments. We have now included a large-scale experiment on a widely used 50-layer ResNet trained on the CIFAR-10 dataset and observed a similar phenomenon. With a larger initial stepsize, SGD tends to first experience an exponential blow-up before eventually converging as the stepsize decreases sufficiently. In contrast, adaptive gradient methods are robust to the stepsize change.
Please find the experimental results in the uploaded PDF of the general response to all reviewers.
We believe these experiments are representative and well support our theoretical findings on the benefits of adaptive methods over SGD.
> **The article claims ... not really serve to give an answer to the claimed question ... the set-up is still not valid for the simplest least square case … it does not seem to me as an incredible update.**
Bounded variance is a standard assumption in the literature (Ghadimi et al., 2013; Zaheer et al., 2018; Arjevani et al., 2023), which we believe is a good starting point to study the benefits of adaptive gradient methods over SGD. While this assumption may not cover all practical ML applications, we note that it holds for many important examples. For instance, the bounded variance assumption has been confirmed in certain deep learning tasks where stochastic gradient noise exhibits Gaussian-like patterns (Xie et al., 2020). In contrast, bounded stochastic gradients is a much more restrictive assumption and does not hold even for Gaussian noise. From a theoretical standpoint, removing bounded gradient assumptions for adaptive methods is an important and active research topic (Faw et al., 2022; Kavis et al., 2022; Wang et al., 2023). Therefore, we believe that our efforts in moving beyond this assumption are a significant contribution.
Furthermore, we expect our results can be extended to more relaxed assumptions, such as expected smoothness (Khaled et al., 2022), defined as $\mathbb{E} \|g(x, \xi)\|^2 \leq 2A (f(x) - f^*) + B\|\nabla f(x)\|^2 + C$ for some constants $A, B, C \geq 0$, which accommodates the least-squares problem suggested by the reviewer.
> **Finally ... it is very difficult to understand the true contribution of the article ... I, was completely lost in what was known ...**
We investigate different algorithms under different setups in order to provide a comprehensive and sound comparison between untuned SGD and adaptive methods. Collectively, these results reveal that a strict advantage of adaptive methods is removing the exponential factor, not achieving an order-optimal rate parameter-agnostically as commonly perceived -- an insight that we formally prove in this paper.
As illustrated in Table 1, all entries in cells without accompanying theorem references are new results.
We would like to highlight that the following are the most pivotal contributions of our paper:
- For SGD, when using diminishing stepsizes, it can converge without knowing $\ell$. This convergence, however, comes with the cost of an exponential dependency on $\eta \ell$. We derive corresponding upper (Theorems 1 and 6) and lower (Theorem 2) bounds.
- For AMSGrad-Norm, we provide convergence analysis for the deterministic setting with both upper and lower bounds (Theorems 5 and 8).
In the stochastic setting, we show that its convergence can be arbitrarily slower than any polynomial rate (Theorem 4).
- For Normalized SGD, we establish its nonconvergence in the stochastic settings with any stepsize (Theorem 3).
The rest of our results are also novel and help support our key insight and build a complete picture. Following the reviewer's comment, we will refine the paper's structure and enhance its clarity in the new version.
> **Theorem 1: $\Delta$ is not defined ... there is no problem when $\eta$ is too big ... this may be an artifact on the bounded variance stochastic gradient assumption.**
The symbol $\Delta$ is defined in line 148 as the initial function value gap.
The convergence with a large $\eta$ is not due to the assumption of bounded variance but rather the use of diminishing stepsizes.
When $\eta$ is excessively large, SGD will initially diverge and then converge once the value of $t$ becomes large enough to render $\eta/\sqrt{t}$ small.
The initial divergence behavior is encapsulated by the exponential dependence on $\eta\ell$ in the upper bound, i.e., $(4e)^{2 \eta^2\ell^2}$, which is the cost of an excessively large initial stepsize.
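This cost can be seen directly on a toy quadratic. The sketch below (our illustration with arbitrary constants, not an experiment from the paper; `L` plays the role of $\ell$) tracks the peak iterate of GD under the $\eta/\sqrt{k}$ schedule for several values of $\eta$:

```python
import math

def peak_iterate(eta, L=10.0, T=200):
    """Peak |x_k| of GD on f(x) = (L/2) x^2 with stepsize eta/sqrt(k)."""
    x, peak = 1.0, 1.0
    for k in range(1, T + 1):
        x *= 1.0 - (eta / math.sqrt(k)) * L   # GD step on the quadratic
        peak = max(peak, abs(x))
    return peak

for eta in (0.2, 0.5, 0.95):
    print(f"eta = {eta:.2f}: peak |x_k| = {peak_iterate(eta):.2e}")
```

The peak grows steeply with $\eta$, consistent with an exponential-in-$\eta^2\ell^2$ cost of the initial divergence phase.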
> **Theorem 2: Good lower bound ... However, why not proving it for SGD? ...**
Thank you for recognizing the value of our lower bound.
We note that the lower bound for GD is a stronger result. As GD is a special case of SGD when the gradient noise is reduced to zero, the lower bound in Theorem 2 also works for SGD. The lower bound already matches our upper bound in Theorem 1, implying that the lower bound is also tight for SGD.
**References**
- Ghadimi, Saeed, et al. "Stochastic first-and zeroth-order methods for nonconvex stochastic programming." SIAM Journal on Optimization. 2013.
- Arjevani, Yossi, et al. "Lower bounds for non-convex stochastic optimization." Mathematical Programming. 2023.
- Zaheer, Manzil, et al. "Adaptive methods for nonconvex optimization." NeurIPS. 2018.
- Khaled, Ahmed, et al. "Better Theory for SGD in the Nonconvex World." TMLR. 2022.
- Xie, Zeke, et al. "A Diffusion Theory For Deep Learning Dynamics: Stochastic Gradient Descent Exponentially Favors Flat Minima." ICML. 2020.
- Faw, Matthew, et al. "The power of adaptivity in sgd: Self-tuning step sizes with unbounded gradients and affine variance." COLT. 2022.
- Kavis, Ali, et al. "Adaptive stochastic variance reduction for non-convex finite-sum minimization." NeurIPS. 2022.
- Wang, Bohan, et al. "Convergence of AdaGrad for Non-convex Objectives: Simple Proofs and Relaxed Assumptions." COLT. 2023.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the rebuttal and for the additional experiments. The authors have discussed my concerns but not really addressed my main ones: notably the significance of their result to help understand the behavior of gradient-adapted step-sizes.
While the additional experiments show the blow-up phase due to the large step-sizes at first, I wonder whether this type of learning curve really happens in practice: this is personally the first time I have seen an initial blow-up.
For these reasons, I decide to maintain my score.
---
Reply to Comment 1.1.1:
Comment: We would like to emphasize the significance of our findings regarding the behavior of SGD versus adaptive methods and provide comments on the initial blowup.
**Significance of Our Results:** Adaptive methods are frequently observed to converge fast and adeptly avoid large gradients, compared to SGD. Yet, there is a notable gap in the theoretical foundation supporting these observations, as we discussed in the "SGD vs. adaptive methods" section of our "related work." Our study stands out as one of the first to provide a quantitative analysis of this phenomenon, differentiating the sample complexities of untuned SGD from adaptive methods. Moreover, it aligns with several existing intuitions in the literature [23, 54, 26] that using gradient norms to rescale the stepsize can mitigate gradient explosion.
**Initial Blowup:**
The pattern of an initial blowup followed by convergence is less often highlighted in published works for a couple of reasons: (1) Most papers present results using the best-tuned stepsize. Typically, if practitioners notice an increasing gradient initially, they switch to a smaller stepsize. (2) The magnitude of the gradient explosion can sometimes surpass numerical limits, preventing the observation of subsequent convergence. This phenomenon is consistent with the exponential term in our theory and is also evident in figure (d) of our additional experiment. However, this behavior is not absent from the literature. For instance, it is documented in the following works:
- The first and third subplots of Figure 2 in [Agarwal et al., 2022], in which BERT was trained using SGD and Adam.
- Figure 2 of [Moulines et al., 2011], where they used a quadratic toy example, the same as the one used for the left subplot of our Figure 1.
**References**
- Agarwal, Naman, et al. "Learning Rate Grafting: Transferability of Optimizer Tuning." 2022 (https://openreview.net/forum?id=FpKgG31Z_i9).
- Moulines, Eric, and Francis Bach. "Non-asymptotic analysis of stochastic approximation algorithms for machine learning." NeurIPS 2011. | null | null | null | null | null | null |
Asynchronous Proportional Response Dynamics: Convergence in Markets with Adversarial Scheduling | Accept (poster) | Summary: This paper studies asynchronous proportional response dynamics (PRD) in linear Fisher markets under adversarial scheduling. The authors proposed an associated game with specific player utilities which admits an exact potential function. Then, the authors show that the set of pure NE of the associated game is the same as the set of market equilibrium bids, which is also the same as the set of maximizers of the potential function. Next, the authors show that the best response dynamics (BRD), where a single player is activated in each round, converges to the equilibrium prices. In terms of PRD with subsets of active players in each round, the authors discussed its connection with BRD and showed that PRD strictly increases the potential function value unless there is no update in bids. Finally, they show that a “generic” linear Fisher market (i.e., no multiplicative equality/degeneracy) exhibits unique equilibrium bids. With the above developments, and through mathematical analysis arguments, the authors prove the main theorem: if the market is generic, and if players are activated in subsets (arbitrarily, but each at least once in every $T$ rounds), then PRD converges to the unique market equilibrium.
Strengths: - The authors presented many interesting results that will likely be key foundational results for future research in game and market dynamics.
- These results are presented clearly with highly informative proof sketches and clear connections to other results.
Weaknesses: None that I could think of. See **Questions**.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: For a non-generic market, does async PRD lead to price convergence, that is, $p_j^t = \sum_i b_{ij}^t \rightarrow p_j^*$? Can this be stated as a corollary somewhere in this work? Either way, it would be good to point this out (immediate, requires some derivation, or still unclear).
Editorial suggestions.
- Line 236: $\Phi$ undefined (yet)?
- Line 329: Extra “a”.
- [nit] I would use "limit point" instead of "condensation point" when discussing the convergence of bids, as the latter is slightly uncommon, although I believe it's used in Rudin's textbook.
- Appendix Line 316: “works” should be "work"? The authors are discussing the implications of multiple previous lemmas.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: N.A.
More details:
- This work does not have (hidden, unstated) limitations. All claims in the abstract have been addressed in the work.
- This work is on theoretical properties of (variants of) well-known market dynamics and does not have immediate or potential negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. Regarding the question about price convergence in non-generic markets: We believe that you are correct and prices do converge in general for PRD (as we did show for best-reply dynamics). However, we were not able to prove this with our current techniques and so the status is still unclear. We will point this out in the paper.
Editing comments: Thank you for these helpful comments, we will incorporate them into the paper.
---
Rebuttal Comment 1.1:
Comment: Great, thanks for the response! | Summary: In this paper, the authors examined Proportional Response Dynamics (PRD) in linear Fisher markets in a setting where participants act asynchronously. In particular, they considered a setting where at each step, an adversary selects a subset of players to update their bids. The paper showed that in the generic case, if each bidder individually uses the PRD update rule when included in the selected group, then the entire dynamic converges to the market equilibrium. As part of their proof, they have also established other properties such as uniqueness of the market equilibrium and the convergence of best-response dynamics in an associated game.
Strengths: 1. This paper studies an interesting setup of the linear Fisher market where the activation can be asynchronous.
2. The theoretical results of this paper appear quite sound. The authors adopted novel proof techniques; in particular, they established an important connection between the associated game and the original game, which helped prove the convergence of asynchronous PRD.
Weaknesses: 1. One thing that appears missing from the current paper is the motivation behind studying the linear Fisher market with asynchronous PRD. It is essential to provide a rationale for studying this particular setting and justify why it is important. Since the authors only allow an intermediate level of allowed asynchrony, it becomes even more crucial to justify why this specific setting is worth investigating.
2. The organization of the paper is unclear, especially regarding the relationship between the results presented in Sections 3-5 and the proof of Theorem 1. I find it a bit difficult to follow the flow and understand which results contribute to proving Theorem 1 and which are significant on their own. Additionally, the role of Section 4 in the overall discussion of the paper is unclear to me (it seems that this section investigates some property of the associated game, but I don't see how it contributes to the other results).
3. The definition of the "generic case" needs more clarity. It would be helpful to provide more explanation as to why this assumption is necessary and what it signifies in the context of the paper.
4. The authors mentioned that the convergence of PRD under the full asynchrony model remains unclear. It would be beneficial to specify the main challenges associated with achieving convergence in this model. Additionally, it'd be good if the authors could elaborate on their conjecture that convergence occurs if information delays are bounded.
5. If the convergence of PRD is shown by the potential function of the associated game being strictly increasing, is it true that the convergence could be arbitrarily slow? This might be related to the second open question (i.e., no speed of convergence result is provided).
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See weaknesses.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. We address below the specific points raised in the review.
Motivation: Natural dynamics in markets (such as the PRD that we study) can be viewed as the aggregate emergent outcome of joint simple learning strategies of the participants. We believe that studying the emergent outcome of multiple interacting learning agents is an important and timely research area in machine learning, both theoretically in general and for economic applications. We do not suggest that we have a concrete application for this particular model but rather that we are advancing the theoretical study of such dynamics which are of significant general interest.
Organization of the paper: Thank you for this comment. We will try to improve and clarify the organization of the paper further.
Generic markets: The genericity condition is described in the introduction and formally defined in Definition 3. Intuitively, this means that there are no degeneracies in the market parameters (such as multiple values that are exactly equal), and we show that the absence of such degeneracies implies the uniqueness of the equilibrium. This in turn allows showing convergence to a point. Additionally, as mentioned in the conclusion, if there is some randomness in the process generating these parameters, the market would be generic with high probability. We will add further clarification in the introduction and include a reference to the formal definition.
Main open problem: A significant challenge in the full asynchrony model is that one cannot directly employ potential-function arguments, as it may no longer be true that the standard potential function improves in every step. The motivation behind the conjecture that convergence still occurs when information delays are bounded is that the bids update in small steps. Thus, there is hope that over an epoch (a series of updates where everyone has updated their bids at least once), the potential will improve. However, we have not analyzed this model in this paper and leave such an analysis for future work.
Speed of convergence: As we discuss in the paper, our current techniques allowed us to prove convergence with asynchronous bid updates but not the speed of convergence. We actually suspect that convergence is in polynomial time (in the problem parameters and 1/epsilon), but such an analysis may require some new ideas and we view it as an interesting and natural direction for further work. | Summary: The paper studies the convergence of Proportional Response Dynamics in linear Fisher Markets.
Fisher markets are markets consisting of $m$ divisible goods to be shared among $n$ agents, each with a linear utility function over items. The market must not only decide the allocation but also assign a payment to each agent for the fraction of each good she receives. We want all items to be sold (market clearing), no agent to spend more than her own budget (budget feasibility), and items to be allocated so that there is no alternative market-clearing and budget-feasible allocation that an agent would prefer to the returned allocation (equilibrium).
It is known that a market equilibrium exists and can be computed in polynomial time by a centralized algorithm. Moreover, there is a decentralized algorithm (known as tatonnement dynamics) that enables agents to quickly converge to the equilibrium, even if an adversary can choose at each time step which non-empty subset of agents applies the dynamics' update rule (subject to some fairness constraint).
This work instead considers another dynamics (proportional response dynamics) that has been proved to converge when all agents update at each time step, but whose convergence against the above-described adversary was unknown. This paper addresses this problem by providing a positive answer.
The result is proved by observing that PRD for linear Fisher markets are equivalent to best response dynamics for a suitable potential game.
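For concreteness, the proportional response update studied here can be sketched as follows (a small illustrative simulation with arbitrary market parameters and an arbitrary fair schedule, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_buyers, n_goods = 4, 3
budgets = np.ones(n_buyers)
valuations = rng.uniform(0.1, 1.0, size=(n_buyers, n_goods))  # generic w.h.p.
bids = np.full((n_buyers, n_goods), 1.0 / n_goods)            # budgets split evenly

def prd_update(bids, active):
    prices = bids.sum(axis=0)                  # p_j = sum_i b_ij
    alloc = bids / prices                      # x_ij = b_ij / p_j
    utils = (valuations * alloc).sum(axis=1)   # u_i = sum_j a_ij x_ij
    new_bids = bids.copy()
    for i in active:
        # proportional response: bid on each good in proportion to the
        # utility currently derived from it
        new_bids[i] = budgets[i] * valuations[i] * alloc[i] / utils[i]
    return new_bids

for t in range(3000):
    # adversarial-flavored schedule: a random subset each round, with every
    # buyer guaranteed to be activated at least once every 2 rounds
    active = {i for i in range(n_buyers) if rng.random() < 0.5}
    active |= {t % 2, 2 + t % 2}
    bids = prd_update(bids, sorted(active))

gap = np.abs(prd_update(bids, range(n_buyers)) - bids).max()
print(f"budget rows: {bids.sum(axis=1)}, residual bid change: {gap:.2e}")
```

Each update preserves buyer budgets by construction, and on this generic random instance the bids settle at a fixed point even though only a subset of buyers is active in each round.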
Strengths: The paper is well written, and the presentation is well-articulated and clear.
While the result re-uses some previously established ideas, it builds an original framework on top of them to prove the main theorems.
Weaknesses: My first doubt about this paper is its relevance for this venue. While Fisher markets have been a very successful topic in the economic and game-theoretic literature, and computational aspects related to these markets have been of interest to theoretical computer science, these markets have not attracted much attention from the AI community, perhaps owing to scarce practical applications (this can be seen also from the reference list of the paper). The paper does not make any effort to justify the study of these markets within this community, nor does it provide relevant applications.
My second doubt is about the relevance of the results. Why should we be interested in proportional response dynamics if there is already a distributed dynamics known to converge to equilibrium even in an adversarial setting, to do so quickly, and to be quite natural? Is there some motivation behind PRD that does not also apply to tatonnement? The paper mentions a similarity to a learning approach; why can the same not be said about tatonnement? And why should we be interested in PRD convergence without a bound on convergence time when we know that another dynamics converges quickly?
Based on this last comment, it would be interesting to see in the experimental section a comparison between the convergence of PRD and that of tatonnement, which is currently absent.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: See above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 2 fair
Limitations: Partially
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. We address below the specific points raised in the review.
Relevance: While we agree of course that our analysis is theoretical, we believe that the paper is relevant to NeurIPS: Natural dynamics in markets (such as the PRD that we study) can be viewed as the aggregate emergent outcome of joint simple learning strategies of the participants. We believe that studying the emergent outcome of multiple interacting learning agents is an important and timely research area in machine learning, both theoretically in general and for economic applications. Additionally, we note that the NeurIPS call for papers this year explicitly includes "Algorithmic Game Theory" in its list of topics.
Motivation for exploring PRD: We agree, there is no doubt that tatonnement may also be viewed as a type of multi-agent learning, although with "goods players" responding to the excess demand rather than bidders responding to prices. Both models are interesting and have been widely studied. We believe that there is still much work left to be done on both dynamics, especially considering the perspective mentioned above of emergent behavior by multiple interacting learning agents.
---
Rebuttal Comment 1.1:
Comment: Thank you for your answer.
I need a further clarification on the second point: Do you believe there is a reason (either psychological or computational, or whatever else) for which PRD makes more sense than tatonnement in Fisher markets?
---
Reply to Comment 1.1.1:
Comment: Thank you for your comment. PRD and tatonnement describe different market behaviors, and so both are of interest as alternative models. In particular, it has been argued in the literature that PRD has features of efficiency and simplicity that make it particularly interesting for the Fisher market model. First, unlike tatonnement, in PRD all goods are cleared at every step of the dynamics. Second, the buyers in PRD do not need to solve an optimization problem at every step to determine their next strategy. Instead, they simply divide their budget proportionally based on their last observed gains, and they do not need any parameters requiring tuning to do so.
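As a purely illustrative sketch of the update just described (a synchronous toy simulation; the matrix layout, function names, and the small random market are our own assumptions, not the paper's code), each buyer re-splits her budget in proportion to the utility she gained from each good in the previous round:

```python
import numpy as np

def prd_step(bids, valuations, budgets):
    """One proportional response update for a linear Fisher market.

    bids[i, j] is buyer i's current bid on good j. Prices are the column
    sums of the bids; buyer i receives the fraction b_ij / p_j of good j
    and then re-splits her budget in proportion to the utility each good
    contributed, with no parameters to tune.
    """
    prices = bids.sum(axis=0)            # p_j = sum_i b_ij
    alloc = bids / prices                # x_ij = b_ij / p_j (columns sum to 1)
    gains = valuations * alloc           # utility buyer i got from good j
    return budgets[:, None] * gains / gains.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
valuations = rng.uniform(0.1, 1.0, size=(4, 3))  # 4 buyers, 3 goods
budgets = np.ones(4)
bids = np.full((4, 3), budgets[0] / 3)           # uniform initial bids
for _ in range(5000):
    bids = prd_step(bids, valuations, budgets)
```

Note that, matching the point above, every good is fully allocated at every step (the allocation fractions of each good sum to one) and each buyer always spends exactly her budget.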
As we mention (e.g., in the introduction and related literature section), both tatonnement and PRD are considered important dynamics in Fisher markets, both have been broadly studied, and specifically, the question of asynchronous PRD that we analyze has been noted by several authors as an open problem. We believe our analysis provides novel and relevant theoretical results on this open problem, and it also raises directions for further work, which we discuss. Following your question, we will try to further clarify the differences between tatonnement and PRD in the paper to make this motivation clearer. | Summary: This paper studies the convergence of Proportional Response Dynamics (PRD) in linear Fisher markets under adversarial scheduling, i.e., an adversary specifies which subset of agents update in a given round, subject to the constraint that each agent must be activated at least once every $T$ rounds. By leveraging auxiliary games and potential functions, it is shown that these asynchronous dynamics converge in generic settings; moreover, the analysis and its connections also show that other natural dynamics converge.
Strengths: This work makes substantial progress on an open problem on the convergence of adversarially scheduled PRD. En route to this result, this work also derives implications for other dynamics by exploiting new structural properties of auxiliary games they consider for the analysis. In general, this paper is quite well-written and the arguments in the main text are well-explained.
Weaknesses: While this work makes substantial progress, it feels like rates of convergence should be attainable under some adaptation of the (seemingly new) arguments provided here; however, the current analysis relies on compactness-style arguments and so does not seem directly amenable to answering this question.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: ---While I am not an expert on this particular setting, all in all, this paper seems to make significant progress on an open problem in convergence of PRD. Moreover, the analysis appears both conceptually clean while novel (or rather, a nice twist on an existing approach), which may lead to further progress towards the remaining open problems highlighted in this work. I would defer to other reviewers on the significance of these results.
---The related work section and discussion appeared quite extensive, though I (again) am not an expert on this particular line of work.
---I could not verify all the details in the Supplementary Material; however, the arguments sketched in the main text seemed to make reasonable sense.
---Towards making progress on the main open problem posed in this work (i.e. total asynchronicity with delays), are there intermediate versions of this that may be amenable to a similar analysis? For instance, if there are stochastic or deterministic (i.e. nonadversarial) delays?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. Regarding the question about intermediate asynchrony models, we agree that intermediate levels of asynchrony with information delays are an interesting avenue to explore which can be useful for making progress in (or at least gaining insights on) the analysis of full asynchrony. Considering stochastic delays is an interesting direction that we have not yet explored. Another approach to consider when examining intermediate models is to start the analysis with very limited adversarial information delays (e.g., delays of a single step). We believe that it might be possible to extend our methods to handle such adversarial delays, but we have not pursued this route in the present paper and leave this analysis for future work.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response (and sorry for the delay)! I have no further questions at this time and will leave my score as is for now. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their valuable feedback; we will use it to improve the paper. We reply to specific points of each reviewer separately. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper studies the Fisher market model, where there is a set of m sellers and n buyers. Each seller brings a unit of a divisible commodity for sale and each buyer brings a budget. The vendors value money while the buyers value the commodities (goods). A substantial amount of research has been done to develop methods for computing market equilibria and to understand the computational complexity of this problem.
This paper makes progress on understanding a well known type of dynamics called proportional response dynamics, where the market starts in some initial configuration and evolves over time. The agents modify their bids based on past engagements with other agents and former market state(s). Specifically, in proportional response dynamics, each agent adjusts their bids relative to the utility value of the goods in the preceding round. Previous studies demonstrated that proportional response dynamics gravitate towards market equilibria in diverse scenarios; these dynamics are also interpretable.
Synchronous dynamics, where the players adapt their strategies at the same speed, were studied extensively in past work, but the more realistic asynchronous setting is less well understood (and, before this paper, not understood at all for the family of proportional response dynamics). The contribution of the paper is to study asynchronous proportional response dynamics in a very general setting where at each step an adversarially chosen subset of players update their strategies, under the constraint that each player responds with some minimum frequency.
The main results of the paper are:
(1) For generic linear Fisher markets, proportional response dynamics with adversarial activation asynchrony, where each player is activated at least once every T steps, converge to the unique market equilibrium.
(2) The paper also finds a connection between the Fisher market and an associated game, such that the set of market equilibria of the Fisher market is the same as the set of Nash equilibria of the associated game and, furthermore, the same as the set of points that maximize a potential function \Phi (on which synchronous proportional response was shown in prior work to behave as mirror descent). The paper shows that every proportional response step by any subset of players increases the potential function \Phi.
Evaluation: The paper makes several nice and significant contributions in a fundamental setting, contributing to the development of a theory of markets with learning agents. Such dynamics capture settings such as the stock market, which evolve over time. The paper raises intriguing questions for further study, such as how far asynchrony can be pushed, quantifying the rate of convergence depending on the extent of asynchrony, and investigating the correspondence identified in the paper between the proportional response dynamics and the associated game.
Various comments:
Page 2, line 60: “intermdiate” level of asynchrony -> intermediate
Page 6, line 236: "\Phi is it’s potential" -> its potential
Page 7, Lemma 1: "for all" should be capitalized -> For all
General suggestion -- some long inequalities or inequalities with fractions of sums are inlined (see e.g. page 5 line 205 in the main text, or page 3 in the appendix line 92 or page 5 in the appendix) and they are harder to read for this reason, it would be nicer to use displaymath or another environment like that. There are also some long paragraphs (e.g. when introducing the Fisher market) which could be divided into several shorter paragraphs for improved readability.
Strengths: + The paper makes several nice and significant contributions in a fundamental setting.
+ The paper is clearly written and of interest to researchers working in the space of games and learning.
+ Raises good questions for future work, such as how far asynchrony can be pushed, quantifying the rate of convergence depending on the extent of asynchrony, and investigating the correspondence identified in the paper between the proportional response dynamics and the associated game.
Weaknesses: - Rate of convergence is not shown.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: There is something that I don't understand and may be missing. Does the associated game have a compact strategy space? For instance if all players bid zero on a good, then the price of that good is zero. The best response is not well defined at such a strategy profile, as there can always be an improved response by a player - for instance, if the price of a certain good is zero and the player finds the good appealing, the player could bid 1 to acquire all of the good, but then bidding 1/2 would still enable them to secure the entire good while leaving some budget to increase the bid on other goods, and so forth. Essentially, the game seems to exhibit discontinuities at the strategy profiles where the price of a certain good is zero. Is Jensen's theorem applied to this game? If so, how does it work? (specifically, how does condition (1) in Jensen paper play out).
Note there are theorems that deal with games with discontinuous payoffs (e.g. https://link.springer.com/article/10.1007/s00199-015-0934-3 and https://www.jstor.org/stable/41237788).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors have acknowledged the limitations of the paper and explicitly stated the open problems remaining to be solved, including analyzing the rate of convergence, studying even stronger degree of asynchrony, and so forth.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. Yes, the strategy space is compact; it forms the full polytope in which each buyer allocates her budget arbitrarily among the items. While you are correct that the best-reply function with respect to the standard utility function is formally undefined at zero, what we need is for the best-reply function with respect to the *associated* utility to be continuous, and this is indeed the case. We note that the best reply to the associated utility function does not imply minimizing the bid on a good for which all others bid zero; rather, there is a finite bid that equalizes the bang-per-buck. We will add a clarification about this in Section 4 and emphasize it further in the proof of Theorem 4.
Editing comments: Thank you for these helpful comments, we will incorporate them into the paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification, I don't have further questions. | null | null | null | null | null | null |
Curriculum Learning for Graph Neural Networks: Which Edges Should We Learn First | Accept (poster) | Summary: This paper proposes a novel curriculum learning method that progressively includes edges into training based on their difficulty level, starting from easy to hard. The difficulty level is determined by the expected model performance in predicting the edges.
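To make the difficulty notion concrete, here is a minimal, hypothetical sketch (the function names and the inner-product decoder are our own assumptions, not the paper's implementation) of scoring edges by how poorly the current embeddings reconstruct them, then selecting the easiest edges first:

```python
import numpy as np

def edge_difficulty(embeddings, edges):
    """Score each edge by how poorly the current embeddings 'predict' it.

    A common self-supervised proxy: reconstruct the adjacency with an
    inner-product decoder; a low predicted probability for an observed
    edge means high difficulty.
    """
    logits = embeddings @ embeddings.T
    probs = 1.0 / (1.0 + np.exp(-logits))   # sigmoid decoder
    return np.array([1.0 - probs[u, v] for u, v in edges])

def easiest_k(embeddings, edges, k):
    """Return the k edges the model currently finds easiest."""
    difficulty = edge_difficulty(embeddings, edges)
    order = np.argsort(difficulty)
    return [edges[i] for i in order[:k]]
```

For example, an edge between two nodes with similar embeddings scores as easy and would enter the curriculum before an edge between dissimilar nodes.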
Strengths: (1) Graph representation learning is a fundamental problem for graph-related tasks, and designing a curriculum learning strategy from the edge perspective is a very interesting topic.
(2) The paper is well-organized and easy to understand.
(3) The extensive experimental analysis is informative for readers.
Weaknesses: (1) Since the initial node embeddings heavily rely on the quality of the encoder and the effectiveness of the reconstruction loss, it is advantageous to initialize training with a pre-trained GNN encoder. The evaluation of edge difficulty in the paper is based on the difference between the reconstruction matrix and the ground-truth matrix, which shifts significantly in the early stages when training from scratch. It is therefore highly recommended to incorporate a pre-trained GNN encoder as an initialization step.
(2) Since the main training strategy in the paper is curriculum learning, it is suggested to compare the impact of different pace functions on performance.
(3) As existing research suggests that curriculum learning is effective for noisy data, it would be valuable to validate the proposed method's effectiveness on different types and proportions of label rates.
(4) The paper lacks some recent literature on the latest advancements for graph neural networks or curriculum learning.
1. Curriculum Graph Machine Learning: A Survey. 2023
2. A Comprehensive Survey on Deep Graph Representation Learning. 2023
3. Graph Neural Network with Curriculum Learning for Imbalanced Node Classification. 2022
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please address the aforementioned weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No specific concern.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments.
Q1. 'Since the initial node embedding heavily relies on the quality of the encoder and the effectiveness of reconstruction loss, it is advantageous to initialize the training process using a pre-trained GNN encoder...'
A1. Your suggestion aligns with the approach we have implemented in our experiments. As described in lines 297-305, a section titled 'Initializing graph structure by a pre-trained model' details our use of a pre-trained GNN to initialize edge difficulty measurement.
Additionally, our ablation studies in Appendix Table 4 showed the performance difference when no pre-trained GNN model is employed. The results validate the beneficial impact of a pre-trained GNN on improving model performance by providing initial structure.
Q2. 'Since the main training strategy in the paper is curriculum learning, it is suggested to compare the impact of different pace functions on performance.'
A2. We indeed compared the impact of different pace functions on performance. In Appendix Table 4, we contrast our method with two commonly utilized pacing functions—linear and root. The results demonstrate that our designed pacing function can outperform these two competitive alternatives.
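For reference, a common parameterization of the two pacing baselines mentioned (a sketch following the usual competence-based formulation; the initial fraction `lam0` and the names are our own choices, not the paper's):

```python
import math

def linear_pace(t, T, lam0=0.2):
    """Fraction of training edges available at step t, linear schedule."""
    return min(1.0, lam0 + (1.0 - lam0) * t / T)

def root_pace(t, T, lam0=0.2):
    """Root schedule: grows quickly early in training, then flattens."""
    return min(1.0, math.sqrt(lam0 ** 2 + (1.0 - lam0 ** 2) * t / T))
```

Both start from the same initial fraction and reach the full edge set at step T; the root schedule admits edges faster in early training.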
Q3. 'As existing research suggests that curriculum learning is effective for noisy data, it would be valuable to validate the proposed method's effectiveness on different types and proportions of label rates.'
A3. In Section 5.3 (refer to Figure 2), we conducted robustness tests against noisy edges, reflecting our primary focus on designing an appropriate curriculum on edges.
We introduced noise into the graph by randomly incorporating noisy edges, ranging from 10\% to 100\% of the original edges. The findings reveal that our RCL model effectively alleviates the performance drop, reducing it by over 50\%. This demonstrates the robustness of our proposed curriculum learning technique when dealing with noisy scenarios.
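The injection protocol described above can be sketched as follows (illustrative only; the function name and exact sampling scheme are our assumptions):

```python
import random

def add_noisy_edges(edges, num_nodes, ratio, seed=0):
    """Add random spurious edges amounting to `ratio` of the originals,
    avoiding self-loops and duplicates of existing edges."""
    rng = random.Random(seed)
    existing = set(edges)
    noisy = set()
    target = int(ratio * len(edges))
    while len(noisy) < target:
        u, v = rng.randrange(num_nodes), rng.randrange(num_nodes)
        if u != v and (u, v) not in existing and (u, v) not in noisy:
            noisy.add((u, v))
    return list(edges) + sorted(noisy)
```

Sweeping `ratio` from 0.1 to 1.0 reproduces the 10%-100% noise levels of the robustness experiment.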
Q4. 'The paper lacks some recent literature on the latest advancements for graph neural networks or curriculum learning.'
A4. We will include the recently related literature in our later revision.
---
Rebuttal Comment 1.1:
Comment: The author's response partially addressed my concerns, and I will maintain my score. | Summary: The paper presents a curriculum learning strategy that works in the node classification setting. The key property in the node classification setting is that edges are not necessarily independent. The paper proposes a way to perform curriculum learning for this task, including the edges from easy to hard based on the distance of node embeddings (a model is trained on a subset of edges, then the easiest k edges are selected for the next round). The suggested method outperforms a number of GNNs in the area of graph structure learning which try to jointly learn the graph structure (removing noisy edges) and node embeddings. In particular, the suggested method is explicitly robust to random noise.
Strengths: - a bottom-up curriculum strategy for node classification
- fully neuronal, not much of a heuristic (only the way the number of edges to add in the next round is some kind of a heuristic)
- The improvement in terms of robustness against noise is quite strong
- General improvement in terms of classification accuracies is nice (although modern and strong GNNs are missing both as comparison and backbone)
- adding the confidence of individual embeddings helps the model focus on edges that are indeed "easy"
Weaknesses: My major concern about the paper is that the experiments only use very basic algorithms (GCN, GIN, and GraphSAGE) as backbones for the proposed method. While I hope that the results carry over to modern GNNs, it would make sense to actually test that explicitly. My suggestion is to use PNA here as it is still a "standard" MPGNN without too fancy additions.
The second major criticism are the baseline methods used in the experiments. I believe that the task that is to be solved is still "just" node classification and thus all kinds of GNNs could serve as baseline, not only those that also try to remove noise. This holds for both the normal experiments as well as the experiment on robustness against noise (there is nothing adversarial about adding random edges). In addition, I believe that existing CL methods are worth comparing to, even if they assume an independence that definitely does not hold - the benefit of RCL should be even more pronounced for that case (and I would like to see that).
Further unsorted points I noted about the paper:
* the fairness criterion does not really feel "fair" to me - optimizing each method's hyperparameters individually would be preferred
* it is unclear how the decoder is trained
* Algorithm 1, Line 6 is very hard to read
* I did not quite get how the number of edges in $A^0$ is determined
* It would be nice to note in the introduction that for graph learning (in contrast to node classification) the independence criterion of other CL methods holds. Just for the node classification task it is not applicable.
* It could be clearer what the difference to existing CL methods for node classification is - those are just mentioned to all perform "independent" CL even though targeting node classification
* 48: in how far is training unsupervised here? As far as I understood the model, the CL process is unsupervised but the overall training still needs supervision.
* 133: was the numerically stable process really a challenge in practice?
* To me the ablation study in the appendix was more interesting than Table 1. I believe that table 2 suffices to show the effectiveness of the method.
* Section 5.3: this is robustness against noise. There is no adversary here that selects the edges. It does not make the experiment less important. Here, an additional baseline using standard (modern) GNNs would be highly appreciated.
* The "adjacency" of classes sounded odd. Are the classes in the synthetic data really ordered and comparable? (i.e. 1 and 10 are further apart than 3 and 4)
* How strong/important is the given theorem? Is it surprising? Please add one more sentence about it other than that the proof is in the appendix. Is the convergence really that challenging?
Writing:
* please make sure that all figures are indeed vector graphics
* related work: please name the authors when using a paper as subject in a sentence
* The paragraph on initializing (the) graph structure by a pre-trained model exists both in the paper and the appendix.
* 134ff: this paragraph does not really add a lot of new information, nearly all of that was said before.
* Tables 1 and 2 should both have a very first column that indicates that the three blocks are the three backbones. (I would rotate the column by 90 degrees then it should still fit)
* Table 2: maybe exchange (-) by OOM to make clear what the problem was. Also specify that the available GPU memory was 48GB.
* Eq1: w is undefined and not used.
* I believe that mentioning the general speed of the method (with a pointer to the table in the appendix) would be beneficial.
Typos:
quite a number of "the" is missing, some appear in the wrong place.
* 82: are -> is
* 91: information that carried by the structure in data, thus -> information carried by the structure in the data, and thus
* Fig1 caption learn -> learns
* 103: potential -> potentially
* 112: represents -> represent (adding a second "let" would also be nice)
* 113: denotes there -> denotes that there
* 114: maps node feature matrix -> maps the node feature matrix
* 143: with theoretical -> with a theoretical
* 168 for node -> for the node
* 178: of K respect to -> of K with respect to
* Algo1, Input: a stepsize -> stepsize
* Algo1, Output: parameter w of GNN -> parameters w of the GNN
* 192ff: please use some tool like grammarly here, I marked 7 places in this paragraph
* 218: remove "mostly" (essentially only graph transformers do not only rely on message passing)
* 217: training -> intermediate
* 233: imposed to -> imposed on
* 248: shift -> differ
* 253: low confident -> low confidence
* 336: attck -> attack (but rather exchange the heading to Robustness analysis against structural noise - that would be a much more accurate heading in my opinion)
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - would it be possible to add experiments based on PNA? (or another modern backbone)
- would it be possible to add the other CL methods in the experiments? Even though they assume some independence which is typically not present?
- can an edge be included in iteration $t$ and not included in iteration $t+1$? Or are edges only added? (making this explicit in the paper would be very helpful to understand the edge selection better)
- How is the number of initial edges determined?
- Why is there no supervised loss on edges? (line 148) In Link prediction, one leaves out some percentage of the edges and tries to correctly predict them. This is clearly supervised. Why is such a scheme not applicable in this setting? (Or rather: its pretty close to the proposed strategy, would it make sense to add it?)
- Does Eq1 in combination with fractional edge weights mean that an edge weight of K is added in each iteration?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: none.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed comments and insightful suggestions.
major 1. We present the results of PNA on synthetic and real-world datasets in global reply point 2 (Table 2 in PDF), which illustrate that our curriculum learning approach consistently improves the performance of the PNA backbone by 2.54\% on average.
major 2.
New experiments. We used both GCN and PNA as backbone models for additional baselines, denoted by the suffixes `-linear' and `-root'. The results are shown in global reply point 2 (Table 2 in PDF). They consistently highlight that our proposed approach surpasses these conventional CL techniques across all datasets. It is also worth noting that these traditional CL methods can still enhance performance on most datasets (synthetic, Computers, and ogbn-arxiv), especially with the Root pacing function.
Choice of baselines. It is critical to note that our choice of baseline models is primarily guided by the attributes of our proposed CL strategy.
First, our contribution is not a specific GNN model, but a general curriculum learning strategy that can improve the performance and robustness of all GNN models that follow a message passing mechanism. Our experimental results substantiate this claim, demonstrating that our CL strategy consistently improves performance across various GNN backbone models.
Second, a unique aspect of our CL strategy is its dynamic alteration of the number of edges, placing it related to the domain of graph structure learning. Accordingly, we compared our method with four state-of-the-art graph structure learning methods.
p1. We use a validation set or cross-validation for fair comparison across all experiments. We require all models to follow the same key architectural hyperparameters, such as the number of graph convolution layers, and the same hyperparameter search space, which is indicated in Appendix lines 562-568.
p2. As indicated in lines 171-172, our framework utilizes a non-parametric kernel function, e.g. the inner product kernel, as the decoder component. This ensures that there is no need for additional parameter training.
p3. Algorithm 1, Line 6 describes the optimization step on the mask matrix $\mathbf{S}$, which gradually includes more edges.
p4. As described in lines 297-305, we used a pre-trained vanilla GNN model to help initialize the edges. Specifically, we use the pre-trained model to extract the latent embeddings, and our optimization model subsequently selects the `easiest' edges to form the initial edge set. The number of selected edges is determined by our optimization model and the parameter $\lambda$.
p5. We discussed the scenarios that motivated our method to extend curriculum learning to handle data dependencies; we will ensure more clarity on this in later revisions.
p6. As we introduced in lines 87-93, existing CL methods for node classification generally treat nodes as independent samples during the learning process and thus cannot properly handle correlations between data samples. These methods typically use heuristic metrics such as node degree or loss as the indicator for designing the curriculum, which does not adequately address the dependencies between nodes.
p7. The primary goal of our model remains supervised learning for the node classification task. The term `unsupervised' pertains to the curriculum design process on the edges. Given that there are no readily computable supervised metrics on edges, we employ a self-supervised task for the formulation of the curriculum on the edges.
p8. Yes. In the ablation studies (Appendix Table 4), we compared our full method with a version without the smoothing technique, which demonstrates the effectiveness of our smoothing techniques.
Specifically, without edge smoothing, the training loss spiked when the number of edges changed discretely, which was caused by shifts in the optimal GNN parameters.
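As an illustration of why continuous weights help (a generic sketch of the idea, not the paper's exact scheme), easing each edge weight toward its target value avoids the discrete on/off jumps that cause loss spikes:

```python
import numpy as np

def smooth_mask(current, target, step=0.1):
    """Move the continuous edge-weight mask a small step toward the target
    0/1 edge selection instead of switching edges on or off discretely."""
    return current + step * (target - current)

mask = np.zeros(5)                        # all candidate edges start switched off
target = np.array([1., 1., 0., 0., 0.])  # curriculum currently selects two easiest edges
for _ in range(50):
    mask = smooth_mask(mask, target)      # weights drift smoothly toward 0/1
```

The GNN then sees edge weights that change gradually between iterations, so the optimal parameters shift smoothly rather than jumping.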
p9. The lack of ground-truth difficulty levels for edges in real-world datasets motivated us to conduct extensive experiments on synthetic datasets, as presented in Table 1. We therefore constructed synthetic datasets with known ground-truth difficulty values, which enables us to verify the capability of the designed curriculum.
p10. We present a further robustness test against noise using the PNA model in global reply point 3 (Table 3 in the PDF).
The results reveal that our curriculum learning approach remains robust against edge noise with the advanced PNA model as the backbone. In addition, we will change the title of this section to `Robustness analysis against topological noise' to better describe the experiments.
p11. The classes in the synthetic data are arranged in a circular order, which is visualized in Appendix Figure 4. This implies that classes 1 and 10 are as close to each other as classes 3 and 4.
p12. The theorem is important as it offers a robust theoretical basis for our approach's ability to handle the optimization challenges. As mentioned above in p8, discrete changes in the number of edges cause spikes in the training loss, which challenges the stability of the optimization process.
q1. See major 1.
q2. See major 2.
q3. It is very unlikely, but possible, that an edge is included in iteration $t$ but not in iteration $t+1$, which also verifies the importance of the edge smoothing technique in ensuring training stability.
q4. See p4.
q5. Our method operates in this manner, and we term it a self-supervised task, as the supervised learning focuses on node classification.
q6. Our method can automatically determine the number of edges to be involved in each iteration; it does not have to be a fixed constant.
---
Rebuttal Comment 1.1:
Comment: I would like to apologize for the very late reply and thank the authors for their explanations and additional experiments. The rebuttal partly alleviated my concerns (especially the generalization part to other backbones seems to hold, generalizing from the "ancient" models used in the first iteration).
P1 (fairness): I was mostly referring to the "two layer" restriction (line 307) which is suboptimal in many settings. Especially when one uses skip-connections (which should have become the default already some years back as it is never detrimental), deeper networks often perform much better. Even for a paper focusing on a new CL strategy, I believe that experiments in settings closer to SOTA models are helpful (and that includes using tricks like skip-connections and virtual nodes).
Q3: it would be highly appreciated to add this comment to the main paper as this behavior is different from what has been described before on an intuitive level.
Q6 follow-up: if K is not a constant but rather computed during runtime, how does the algorithm make sure that edges are indeed added? -> I found that in the response to another review, highlighting that $\lambda |S-A|$ is part of the optimization. And there is no guarantee that edges are added every round, it is just encouraged.
As also highlighted by other reviewers, different pacing functions seem to be a key experiment that should not be hidden in the appendix.
As pointed out by the first reviewer, the submission is harder to read than necessary. I believe that a revised manuscript could be much clearer and easier to follow. Furthermore, I would like to encourage additional time for proofreading for the camera ready version or next iteration of the paper. As a reader, I really want to think only about the method and not about all those grammar mistakes.
Although I believe that content-wise, the paper has enough novelty for the conference (and also experiments that support it) and would thus qualify as clear accept, I tend to keep my "weak accept" score, as the presentation should indeed be clearer. Even after reading the responses, I would not feel confident to reimplement the method in another project. For example, the function $g(S;\lambda)$ seems to be extremely important, but then its definition (not really longer) is somewhat hidden in line 185. Overall, the idea to use link prediction as a proxy to gauge how well edges are expected and then using those values to design a CL rule that adaptively includes more and more edges is not something that should be hard to describe. | Summary: This study addresses the challenge of varying learning difficulties among edges in a graph and proposes a curriculum learning approach that gradually incorporates more difficult edges. Experimental results on synthetic and real-world datasets demonstrate the effectiveness of the proposed method in improving accuracy and robustness.
Strengths: 1. The research addresses an interesting and important problem of learning diverse difficulties among graph edges and understanding graph structures.
2. The proposed method, which employs a self-supervised approach to measure edge difficulties, is novel. The motivation behind the curriculum learning method is well-founded.
3. Significant accuracy improvements are observed on synthetic datasets across various settings. Figure 2 provides clear evidence of the method's effectiveness in enhancing robustness against noisy edges.
Weaknesses: 1. The improvements on real-world datasets are not substantial.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Could the inconsistent significance of the performance improvement in Table 2 be attributed to the edge selection method's limitations on these datasets? It appears that RCL performs better on large-scale datasets. Is there a specific reason for this observation?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments.
W1. `The improvements on real-world datasets are not substantial.'
A1: We would like to highlight, in line 323-330 and line 331-335, that our model: (1) Tops performance in 26 out of 27 tasks across nine real-world datasets, signifying its effectiveness; (2) Shows consistent improvement over three different GNN backbone models, affirming its generalizability; (3) Produces statistically sound results, outperforming the second-best model in 43 out of 48 tasks with a significance of $p<0.05$, and in 38 out of these 43 cases, with a significance of $p<0.01$.
Q1. `Could the inconsistent significance of the performance improvement in Table 2 be attributed to the edge selection method's limitations on these datasets? It appears that RCL performs better on large-scale datasets. Is there a specific reason for this observation?'
A1. As we mentioned above, our model can consistently improve the performance of the backbone GNN by leveraging the designed curriculum. Although the level of performance improvement still differs across datasets, it is extremely difficult to determine the underlying reason due to the lack of ground-truth difficulty levels for edges.
However, a plausible explanation for the seemingly smaller absolute improvement on certain datasets (such as CS, Physics, and Photo) is that the underlying models for these datasets already yield a notably high prediction accuracy (exceeding 90\%). Consequently, there is less room for significant improvement.
In contrast, for large-scale datasets, the base models tend to exhibit lower performance (around 70\%), thus providing more opportunities for our model to enhance their performance. | Summary:
This work explores curriculum learning on data that is not independent, but has dependencies, such as graph edges. Three issues are raised when transferring curriculum learning techniques to learning on graphs: 1. there is no simple way to evaluate how easy/hard an edge is; 2. the curriculum should include a gradual way of involving more edges during training based on model performance; 3. as the GNN will observe different topologies, trainability issues might arise. The solutions consist of 1) using a self-supervised module to select the K easiest edges, 2) proposing a new objective using the Lagrangian multiplier and 3) having a smooth transition between structures across iterations by having edge reweighing depend on edge selection occurrences. The curriculum learning scheme proposed is applied to standard GNNs such as GCN, GIN, GraphSAGE, showing improvements for most of them on synthetic and real-world node classification datasets.
Strengths: I find that the paper is above the acceptance bar in its current form. In particular, the area is of interest to the graph community, the proposed approach is sensible, the writing is clear and all experiments were successful in the sense that the gradual modifications to the adjacency improve the final performance over having the same GNN model use the given graph as input. The visualisation of the learnt edge curriculum also provides interesting insights.
Weaknesses: My understanding is that, based on the fact that edges are selected based on similarity in embedding space, the proposed approach might struggle with heterophilic graphs. While I appreciate the synthetic experiments that emphasise the good performance at different levels of homophily, I would encourage the authors to also consider real-world heterophilic datasets, such as those proposed in [1] or [2].
[1] - Lim, Derek, et al. "Large scale learning on non-homophilous graphs: New benchmarks and strong simple methods." Advances in Neural Information Processing Systems 34 (2021): 20887-20902.
[2]. - Platonov, Oleg, et al. "A critical look at the evaluation of GNNs under heterophily: are we really making progress?." arXiv preprint arXiv:2302.11640 (2023).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors only briefly mentioned limitations of the proposed approach, I encourage them to include a paragraph discussing them.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for your valuable comments and acknowledgement of our work.
Q1. `My understanding is that, based on the fact that edges are selected based on similarity in embedding space, the proposed approach might struggle with heterophilic graphs. While I appreciate the synthetic experiments that emphasise the good performance at different levels of homophily, I would encourage the authors to also consider real-world heterophilic datasets, such as those proposed in [1] or [2].'
A1. Thank you for your suggestion; we have expanded the experiments to six real-world heterophilic datasets. The results are presented in global reply point 1 (see PDF Table 1) and indicate consistent improvements of our method over baseline GNN models on these heterophilic datasets. It is worth noting that the improvements on these datasets are even more significant than on the homophilous datasets used in our paper. Specifically, RCL outperforms the second-best method on average by 5.04\% and 4.55\% on the GCN and GIN backbones, respectively.
Although the inner-product decoder we utilized might imply an underlying homophily assumption, our method appears to still benefit from leveraging the edge curriculum present within the input datasets. A reasonable explanation is that standard GNN models usually struggle with heterophilous edges, while our methodology designs a curriculum that focuses more on homophilous edges, potentially leading to the observed performance boost.
In addition, we will include a paragraph to discuss the limitations of our work in future revisions.
---
Rebuttal Comment 1.1:
Comment: I’d like to thank the authors for their response. While I appreciate the effort to run RCL on heterophilic graphs, the datasets chosen have been found to have significant limitations in [2] that, once fixed, make the improvements over standard GNNs irrelevant. This is the reason why I initially suggested running RCL on [1] or [2]. In this context and as I am not sure how reliable the experiments of RCL on these datasets are, I will maintain my score.
[1] - Lim, Derek, et al. "Large scale learning on non-homophilous graphs: New benchmarks and strong simple methods." Advances in Neural Information Processing Systems 34 (2021): 20887-20902.
[2]. - Platonov, Oleg, et al. "A critical look at the evaluation of GNNs under heterophily: are we really making progress?." arXiv preprint arXiv:2302.11640 (2023). | Rebuttal 1:
Rebuttal: We sincerely thank all the reviewers for their efforts in providing critiques and suggestions on our work. We summarize the newly added experiments and frequently asked questions below:
1. We have included new experiments on six real-world heterophilic datasets. As shown in PDF Table 1, our method consistently improves the performance of backbone GNN models on these heterophilic datasets. Specifically, RCL outperforms the backbone GNN on average by 5.04\% and 4.55\% on the GCN and GIN backbones, respectively. These results demonstrate that our method is not limited to homophilous graphs.
2. In PDF Table 2, we have added new experiments that adopt a modern GNN architecture, the PNA model [1]. From the table we can observe that our proposed method improves the performance of the PNA backbone by 2.54\% on average, which further verifies the effectiveness of our method under different choices of the backbone GNN model.
In addition, in Table 2 we further include two traditional CL methods for independent data as additional baselines, following classical works [2,3]. We employed the supervised training loss of a pretrained GNN model as the difficulty metric, and selected two well-established pacing functions for curriculum design: linear and root pacing, defined as follows:
$$\mathrm{Linear\colon } K_{\mathrm{linear}}(t) = \frac{t}{T}|V|;$$
$$\mathrm{Root\colon } K_{\mathrm{root}}(t) = \sqrt{\frac{t}{T}}|V|,$$
where $t$ is the number of current iterations and $T$ is the number of total iterations, and $|V|$ is the number of nodes.
We utilized GCN and PNA as backbone architectures, identified by the suffixes '-linear' and '-root'. Across all datasets, the results consistently demonstrate that our proposed method outperforms traditional CL approaches.
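The two pacing functions above can be sketched directly (a minimal illustration of the formulas; `n` stands for $|V|$):

```python
import math

def pacing_linear(t, T, n):
    """Linear pacing: include t/T of all n samples at iteration t."""
    return int(t / T * n)

def pacing_root(t, T, n):
    """Root pacing: sqrt(t/T) grows faster early in training,
    so more samples are included sooner than with linear pacing."""
    return int(math.sqrt(t / T) * n)
```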
[1] Corso, Gabriele, et al. "Principal neighbourhood aggregation for graph nets." Advances in Neural Information Processing Systems 33 (2020): 13260-13271.
[2] Bengio, Yoshua, et al. "Curriculum learning." Proceedings of the 26th annual international conference on machine learning. 2009.
[3] Kumar, M., Benjamin Packer, and Daphne Koller. "Self-paced learning for latent variable models." Advances in neural information processing systems 23 (2010).
3. We present a further robustness test against random noisy edges using the PNA backbone model. The results are shown in PDF Table 3, which further demonstrates that our curriculum learning approach improves robustness against edge noise with the advanced PNA model as the backbone.
4. We would like to clarify that while utilizing a pre-trained GNN model can help initialize the edge set, thanks to its capacity to provide meaningful node embeddings to guide curriculum design, the pre-trained model is not a necessity for our method. Our technique can operate effectively without it. Our ablation studies in Appendix Table 4 confirm that even without pre-trained initialization, the method can still establish a meaningful curriculum for numerous datasets.
Pdf: /pdf/339ec573b6367a4503cb41b7587026385debf20b.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes a curriculum learning (CL) method for graph neural networks on the node classification task. Existing CL strategies are mostly designed for independent data samples, and cannot trivially generalize to graphs that contain data dependencies. The proposed solution, termed Relational Curriculum Learning (RCL), learns to select edges from easy to hard in each iteration. The overall idea is to formulate curriculum learning as an optimization problem w.r.t. a discrete mask over the edges. Since it is hard to optimize the discrete mask, the authors relax the goal and alternately optimize the GNN parameters and the edge mask in an EM fashion. Tricks like edge reweighting are applied to stabilize the change in the edge mask. Experiments on synthetic datasets and 9 real-world datasets verify the effectiveness of RCL, though the improvements on real-world datasets are marginal given that RCL requires a pre-trained vanilla GNN model for initialization.
Strengths: - S1: This paper proposes a curriculum learning method tailored to the need of node representation learning. Unlike classical CL methods that learn to select easy samples, RCL uses all node labels but learns to select easy edges for GNN propagation.
- S2: This paper conducts experiments on both synthetic datasets and real-world datasets. It achieves consistent improvement on 3 backbone GNN models and 9 real-world datasets, though the improvement is marginal.
Weaknesses: - W1: This paper is badly written and not very easy to follow. The title is not precise considering the proposed model. I would suggest changing it to “Edge Curriculum Learning for Node Classification with Graph Neural Networks”, as the model only applies to homogeneous graphs rather than multi-relational graphs. The 2nd paragraph in the intro doesn’t have a good underlying logic. For example, why traditional CL strategies are insufficient is not clarified. The challenges claimed in the 3rd paragraph are also very weak. To my understanding, the first challenge might be a real one for classical CL methods, but the other two challenges seem to be fabricated for the proposed model, not general to CL methods. The first 2 paragraphs of Sec. 4 just repeat two paragraphs from the introduction.
- W2: There are some math and concept errors in the core statement of the proposed model. Eqn. 2 is not a faithful usage of Lagrange multipliers (or KKT conditions, since the constraint is an inequality), since the number of edges K is missing there and the adjacency matrix A is introduced from nowhere. The alternating optimization in Algorithm 1 and Lines 192-205 is not proximal optimization. Proximal gradient is the method that optimizes a continuous surrogate with gradient and projects the solution back to a discrete space. Here you just alternate between optimizing the GNN model and the continuous mask, which is more like the EM algorithm.
- W3: The proposed RCL model lacks both a clear high-level idea that can educate the community and an excellent performance that can make this model an off-the-shelf tool. Given that RCL requires a pretrained GNN model and new hyperparameters, needs a relatively complicated process for optimization, and only achieves slightly better results on real-world datasets, I doubt its significance to the community.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: - Q1: Line 12-15: How does the proposed model handle the data dependency issue? It is not explicitly mentioned.
- Q2: Line 30-42: Can you describe the basic idea of traditional CL strategies and why they are insufficient for graphs? Can you describe how the dependencies in graphs impose a challenge for traditional CL methods?
- Q3: Line 51-57: Why is it difficult to design an appropriate curriculum to gradually involve edges? Does the difficulty for convergence only hold for your method or any CL methods in general?
- Q4: Line 65-66: You introduced the optimization model as a solution to an appropriate learning pace. How does the optimization model result in automatically increasing the number K to involve more edges? Some logic or description is missing here.
- Q5: Line 163: Does $\odot$ mean element-wise multiplication here?
- Q6: Line 170: The constraint, plus the residual errors, guarantee that only the most K well-expected edges are selected, right?
- Q7: Line 171-172: Please cite VGAE[1] for this dot-product style link prediction design.
- Q8: Algorithm 1 Line 6: Is the argmin taken over continuous S space or binary S space? I presume it’s continuous space here.
- Q9: Line 197-198: What do you mean by “the proximal terms” here?
- Q10: How do you guarantee that the number of edges grow monotonically during the curriculum learning?
- Q11: Why is the performance on Cora, Citeseer, PubMed higher than literature? Did you use a split other than the original split in the GCN paper? The authors should clarify that.
- Typos:
- Title: Graph Neural Network → Graph Neural Networks
- Line 11: cannot be trivially generalized to → cannot trivially generalize to
- Line 195: You may add a reference link to Algorithm 1.
- Line 200: extracts → extract
- Line 219: recursively → iteratively
- Table 2: GraphSage → GraphSAGE
- Sec. 5.3: attck → attack
[1] Kipf and Welling. Variational Graph Auto-Encoders. NIPS 2016 workshop.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: The authors don’t discuss the limitations and societal impacts in the paper. I would suggest the authors adding one paragraph to discuss that. For example, some limitations could be that RCL requires a longer training time since it needs to first train a vanilla GNN model and then finetune it with curriculum learning.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed assessment and valuable suggestions.
R1: The term `Relational' in our title is intended to emphasize our research on integrating inter-node relationships into Curriculum Learning (CL) strategies for GNN models.
`why traditional CL strategies are insufficient is not clarified'
We have elaborated on why traditional CL strategies are insufficient in lines 40-42. Traditional CL strategies can only deal with independent data samples such as images, neglecting the relationships and structures in dependent data. Please refer to the second point in the global reply for extended comparison experiments with traditional CL methods, which provide further empirical evidence.
`...challenges seem to be fabricated for the proposed model, not general to CL methods'
The challenges are not fabricated solely for the proposed model but indeed are general to CL methods on handling graph edges.
(1) Formulating a progressive, edge-inclusive curriculum is a fundamental challenge for all CL methods. Our ablation studies (refer to Appendix Table 4) reinforce this point by showing that various previously proposed pacing functions struggle to perform well. The results reveal the importance of an appropriate curriculum for selecting edges.
(2) The drift of optimal GNN parameters, discussed in lines 55-57, is a universal challenge arising from the necessity to modify the number of edges, and it is not unique to our method; graph structure learning models, for example, face the same issue.
R2: `Eqn. 2 is not a faithful usage of Lagrange multiplier...'
We acknowledge the need for more clarity regarding the use of the Lagrange multiplier method in Eqn. 2.
We note that the inequality $||\mathbf{S}||_1 \geq K$ in Eqn. 1 is equivalent to the equality $||\mathbf{S}||_1 = K$. This is because the second term $\beta \sum\_{i,j} \mathbf{S}\_{{ij}} \mathbf{R}\_{{ij}}$ in the loss function would always encourage fewer selected edges by the mask matrix $\mathbf{S}$, as all values in the residual error matrix $\mathbf{R}$ and mask matrix $\mathbf{S}$ are nonnegative. This aligns with our motivation discussed on line 163 of the paper, `To filter out the edges with $K$ smallest residual error'.
Given this, we can incorporate the equality constraint as a Lagrange multiplier and rewrite the loss function as
$\mathcal{L}= L_{{GNN}} + \beta \sum\_{i,j} \mathbf{S}\_{{ij}} \mathbf{R}\_{{ij}} - \lambda (||\mathbf{S}||_1 - K)$.
As $K$ is a constant value, minimizing the loss function is equivalent to minimizing the Eqn. 2 in our paper:
$\min\limits\_{\mathbf{w}, \mathbf{S}} L\_{{GNN}} + \beta \sum\_{i,j} \mathbf{S}\_{ij} \mathbf{R}\_{ij} + \lambda || \mathbf{S} - \mathbf{A} ||,$
where $\mathbf{A}$ is the input adjacency matrix.
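As an illustrative, element-wise sketch of this relaxed objective (our own simplification, not the authors' full algorithm): for a selectable edge with $\mathbf{A}_{ij}=1$ and binary $\mathbf{S}_{ij}$, the per-edge cost is $\beta \mathbf{S}_{ij}\mathbf{R}_{ij} + \lambda(1-\mathbf{S}_{ij})$, so the minimizer includes the edge exactly when $\beta \mathbf{R}_{ij} \le \lambda$; low-residual (easy) edges enter first, and a growing $\lambda$ admits more of them.

```python
import numpy as np

def update_mask(R, A, beta, lam):
    """One illustrative mask update for
    min_S  beta * sum(S * R) + lam * ||S - A||_1,  S_ij in {0, 1},
    restricted to existing edges (A_ij = 1).
    Per edge, S_ij = 1 costs beta * R_ij and S_ij = 0 costs lam,
    so an edge is selected exactly when beta * R_ij <= lam."""
    return ((beta * R <= lam) & (A > 0)).astype(float)
```

As $\lambda$ increases over training, the thresholded set grows toward the full input edge set, matching the described pacing behavior.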
`...is not proximal optimization.'
We acknowledge that this is a misusage of terminology. `EM-style alternative optimization' is more appropriate in describing our method.
It is worth noting that the functionality of our proposed method should remain valid, despite this misuse of terminology.
R3.
1. Our method does not strictly require a pretrained GNN model. Refer to global reply point 4 for details.
2. Our proposed RCL methodology consistently enhances the performance of GNN models across a variety of GNN backbones, with statistical significance ($p<0.01$) in 38 out of 48 tasks.
3. Addressing the high-level idea, the RCL model introduces a novel perspective on handling data dependencies with curriculum learning, which can stimulate further research in both curriculum learning and graph representation learning areas.
(Belows are replies to questions)
A1. In the lines 109-110, we described treating data samples as nodes and their dependencies as edges. We then devised a CL strategy that leverages the inherent difficulty level of the edges to enhance the performance of the GNN model.
A2. As we discussed in lines 48-50 and 122-124, traditional CL strategies usually use supervised computable metrics (e.g. training loss) to first quantify sample difficulty, and then gradually incorporate samples from easy to hard during the training process. However, quantifying the difficulty level of edges, where no supervision is available, is challenging, since supervised tasks are typically associated with nodes.
A3. See reply in R1 above.
A4. We elaborated in Section 4.2 (lines 185-191). The regularization term $g(\mathbf{S};\lambda) = \lambda ||\mathbf{S}-\mathbf{A}||$ in Eqn. 2 allows control over edge incrementation through the parameter $\lambda$, which increases with the number of training epochs. As $\lambda$ grows, the term $g(\mathbf{S};\lambda)$ pushes the mask matrix $\mathbf{S}$ to gradually approach the input adjacency matrix $\mathbf{A}$, thus progressively involving more edges in training.
A5. Yes, it means element-wise multiplication.
A6. Yes, as we mentioned above in response to `W2', only the most $K$ well-expected edges will be selected.
A7. We will add the citation in our later revision.
A8. Yes, it is continuous space in Algorithm 1 Line 6.
A9. We refer to the last term of Algorithm 1 Line 3 and Line 6.
A10. As we illustrated in response A4, the discrepancy penalty between the mask matrix $\mathbf{S}$ and the input adjacency matrix intensifies as $\lambda$ increases. This ensures a progressive increase in edge involvement during learning, continuing until the number of selected edges equals the total input edges.
A11. In line 285, we have clarified that we follow the data splits from previous study on these three datasets, which is a commonly used split (adopted by Pytorch-geometric library, refer to 'full' split).
We commit to correcting all the typographical errors and will include a paragraph discussing the limitations of our work in our subsequent revision.
---
Rebuttal Comment 1.1:
Title: Discussion
Comment: Thanks to the authors for their response. The authors addressed some of my concerns, but the major concern about the contribution and significance remains.
**W1**
I understand that the authors chose the title to emphasize the selection of edges in their method. My concern is that the term "relational" usually refers to graphs with typed edges, e.g. knowledge graphs. Therefore, I feel edge curriculum learning might be more precise here.
For the weakness of traditional CL methods, I know the authors tried to claim that in Line 40-42, but my original concern is that Line 40-42 are not well supported. It would be better if the authors can add a sentence to illustrate the high-level idea of traditional methods and why they fail on graph data.
I am still not convinced that the last two of three challenges are not fabricated. Edge curriculum learning is the technique proposed in this paper, so challenge (2) is more like for the proposed technique, not general to CL methods on graphs. (3) is like a vague challenge that can seemingly fit any GNN (not necessary CL methods) and there isn't a good way to verify the proposed method really solved this challenge.
**W2**
The authors are correct. If one reads Eqn. 1 alone, the only solution is to use KKT conditions. The authors should mention that the residual error $R_{ij}$ is non-negative, thereby converting the inequality to an equality. Lagrange multipliers can only be applied to equality constraints.
Thanks for acknowledging the terminology problem.
**W3**
If I understand correctly, the baseline method CLNode doesn't use pretrained GNNs. Therefore, I would suggest using RCL w/o pre-trained GNNs for a fair comparison. The improvement seems to be marginal compared to CLNode. **Given that baselines seem to be reproduced by the authors in a setting different to their original papers, I doubt whether such marginal improvement is real or not, as it may be caused by different levels of engineering efforts on baselines and the proposed RCL.** Other reviewers and AC may correct me if they can confirm these results are significant enough for the community.
**Q11**
Thanks for confirming the data splits. Since methods like CLNode are run in settings different from their original papers, please clarify which results you reproduced from the official code and which you copied from the original papers.
---
Reply to Comment 1.1.1:
Title: Response to Discussion (1/2)
Comment: R1.``I understand that the authors chose the title to emphasize the selection of edges in their method. My concern is that the term "relational" usually refers to graphs with typed edges, e.g. knowledge graphs. Therefore, I feel edge curriculum learning might be more precise here.''
Thank you for your suggestion. We will emphasize that our method is for designing curriculum for graph edges and avoid the term `relational' in the revised title.
``For the weakness of traditional CL methods, I know the authors tried to claim that in Line 40-42, but my original concern is that Line 40-42 are not well supported. It would be better if the authors can add a sentence to illustrate the high-level idea of traditional methods and why they fail on graph data.''
We did not claim that traditional CL methods fail on graph data. Instead, we argue that traditional CL methods are not designed to handle the curriculum of the dependencies between nodes in graph data, which are crucial. Traditional CL methods build a curriculum over nodes according to the difficulty of predictions on individual nodes. However, we aim to learn a curriculum over edges, which requires inferring the difficulty levels of edge predictions based on the information of nodes and their dependencies.
In addition, experimental results demonstrated that our method, which addresses the edge curriculum, outperforms those that do not, as shown in Table 2 of the global rebuttal PDF. Specifically, across all seven datasets, our method consistently outperformed the comparison methods by 3.3\% on average.
``I am still not convinced that the last two of three challenges are not fabricated. Edge curriculum learning is the technique proposed in this paper, so challenge (2) is more like for the proposed technique, not general to CL methods on graphs. (3) is like a vague challenge that can seemingly fit any GNN (not necessary CL methods) and there isn't a good way to verify the proposed method really solved this challenge.''
Challenge (2) emphasizes the importance of designing a proper pacing function for CL methods on graph data. Since existing CL methods for graph data typically use a fixed pacing function to introduce samples, they cannot flexibly adjust the learning pace to the model's training status. Designing an adaptive pacing function for graph data is difficult because it requires jointly optimizing the supervised learning tasks on nodes and the number of chosen edges. Therefore, this challenge is not just about the edge curriculum but about the open problem of adaptive pacing for CL on graph data. We will highlight this in the revised version.
Challenge (3) is of interest to the graph structure learning community, which studies the joint optimization of graph neural network models and graph structures. Our experimental results demonstrate that this is indeed an open problem in this community and that our technique is effective in solving it. We have produced a new figure showing that, without the smoothing technique, the training loss spikes, reflecting shifts in the GNN parameters caused by discrete changes in the number of edges. After adding the smoothing technique, however, the training loss converges smoothly; hence, the smoothing technique plays an important role in stabilizing the training process. We cannot post a link to the figure, as required by the NeurIPS official comment rules, but we will include the figure in the revised paper.
Provable Guarantees for Nonlinear Feature Learning in Three-Layer Neural Networks | Accept (spotlight) | Summary: This paper studies the feature learning capability of a 3-layer neural
network, where the bottom layer is random and fixed, the middle layer is
trained for only one step from zero, and the upper layer is trained in
the rest of the gradient descent steps. The paper characterizes the
richer feature learning capability of this 3-layer network. That is, the
one-step training of the middle-layer weights essentially maps the
original random features to another space, which enlarges the type of
functions that can be learned. Two special cases are then considered:
(i) the single index model (similar to [18]); and (ii) more
significantly, the quadratic feature model (more complex than [18]). The
results demonstrate a significant reduction in the sample complexity
compared to kernel methods. Further, the latter class of functions in
(ii) cannot be approximated by 2-layer networks, and thus a
depth-separation between 2-layer and 3-layer networks is established.
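The layer-wise procedure described in this summary (a random, frozen bottom layer; one gradient step on the zero-initialized middle layer; then a convex fit of the top layer) can be sketched in a few lines of NumPy. This is only an illustrative toy, not the paper's exact setup: the dimensions, the identity link for $g^*$, the unit step size, and the ReLU subgradient convention are all assumptions made here.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m1, m2, n = 20, 64, 64, 512  # dims and sample size are illustrative

# Synthetic quadratic-feature target y = x^T A x (here g* is the identity).
X = rng.standard_normal((n, d))
A = rng.standard_normal((d, d)); A = (A + A.T) / 2.0
y = np.einsum("ni,ij,nj->n", X, A, X)

relu = lambda z: np.maximum(z, 0.0)

# Bottom layer: random and kept fixed throughout.
V = rng.standard_normal((d, m1)) / np.sqrt(d)
H1 = relu(X @ V)

# Middle layer: initialized at zero, trained for exactly one gradient step
# on the squared loss (ReLU'(0) taken as 1, a common subgradient convention).
W = np.zeros((m1, m2))
a = rng.standard_normal(m2) / np.sqrt(m2)
pred = relu(H1 @ W) @ a                 # identically zero at W = 0
grad_W = H1.T @ np.outer(pred - y, a) / n
W -= 1.0 * grad_W

# Top layer: with V and W frozen, fitting a is a convex least-squares problem.
H2 = relu(H1 @ W)
a, *_ = np.linalg.lstsq(H2, y, rcond=None)
print("train MSE:", float(np.mean((H2 @ a - y) ** 2)))
```

The single middle-layer step is what moves the features away from the initial random-feature kernel; everything after it is an ordinary convex regression.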
Strengths: 1. The results are novel and solid. The precise characterization of
the feature learning capability of this particular 3-layer network
(Theorem 1), and the ability to learn functions of quadratic features,
are both significant contributions to the field.
2. The paper is also very well-written and presents the main intuitions
quite clearly.
Weaknesses: Nothing particular.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. Can the authors comment on other possible features that this
approach can learn effectively (beyond quadratic features)?
2. The one-step training of the middle-layer weights seems to be a
crucial element for learning a specific type of features. At the same
time, one-step training could be limiting. Can the authors comment on
possible generalizations for multi-step training on the middle-layer?
Post rebuttal phase:
The reviewer wishes to thank the authors for their response and preliminary thoughts on these questions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The discussion of limitations is adequate.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed and thoughtful review, and address specific comments below.
> “Can the authors comment on other possible features that this approach can learn effectively”
- We hypothesize that our results here can be extended to learn arbitrary polynomial features, and thus allow us to learn hierarchical functions $f^* = g^* \circ h^*$ where $h^*$ is a polynomial, with sample complexity scaling with the degree of $h^*$. However, we don’t currently have any concrete results in this setting and hence defer this to future work.
> “Can the authors comment on possible generalizations for multi-step training on the middle-layer?”
- While initial works studying feature learning in two-layer neural networks such as [8, 18] relied on the single-step training procedure, later works [2] studied training $W$ for multiple steps and were able to improve the dimension dependence in the sample complexity. For three layer neural networks, it is possible that a more refined analysis of multi-step training could improve the sample complexity. However, analyzing the training dynamics of $W$ for multiple steps introduces a number of new technical challenges which will likely require developing new techniques. As such, we defer this to future work.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. I will keep my review score. | Summary: The paper theoretically studies the feature learning in three layer neural networks. For the analysis, it considers layer-wise GD; more precisely, the first layer is not trained, the second layer is trained for one step, and then there is the training for last layer. Particularly, they show that three layer neural networks achieve a better bound for learning functions of the form $g(x^TAx)$ comparing to the known bounds for two-layer neural networks. They also show an optimization-based separation showing that three-layer NNs can learn functions that two-layer NNs cannot learn with polynomial width and time (assuming a non-increasing learning rate).
Strengths: - The paper considers the optimization of three-layer NNs with a layer-wise training and provides a fairly general approach for studying their feature learning capabilities (in the paper, the features have also been characterized for the examples considered).
- For functions of the type $g(x^TAx)$, a sample complexity upper bound for three-layer NNs is proved which is more efficient than the known upper bounds for two-layer NNs.
- There is an optimization-based separation result provided between two-layer and three-layer NNs.
- Paper is generally very well written.
Weaknesses: - Generally, the limitations of the work have been discussed in the paper. These limitations are quite common in the deep learning theory literature: for example, the training is layer-wise, the first layer is not trained at all, and the second layer is trained for only one iteration. In this sense, the work is similar to the theoretical analyses of two-layer NNs in many recent works.
- The simulations have been implemented for the settings as the theorems. However, it would have been interesting to also run experiments with the more common setting (e.g., training all parameters together) for both two-layer and three-layer NNs and compare the results with the results of the model in the theoretical settings (e.g., with layer-wise training). This would potentially show if the assumptions made in theory are too restrictive.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - Can the analysis be done in the SGD setting as well?
Here are also a few typos:
- line 246: $\mathbb{E}[(g^{*})'] = 0$ instead of $\mathbb{E}[g'] = 0$
- line 291: inconsistency regarding $P_2 f^*$ (e.g., with equation 16) (I think also the same problem appeared in the appendix.)
- line 298: is it the intended equation?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: There is no negative societal impact. The limitations of theory have been discussed in the work and are (unfortunately) quite common in the current literature of deep learning theory.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed and thoughtful review, and address specific comments below:
> “However, it would have been interesting to also run experiments with the more common setting (e.g., training all parameters together) for both two-layer and three-layer NNs…”
- We agree that this is an interesting experiment to run, and will add it to a future revision of our paper.
> “Can the analysis be done in the SGD setting as well?”
- For the first stage we do need a large batch size, but the second stage can be done in the online SGD setting. Since the loss is convex in $a$, the analysis for this would be a standard application of online convex optimization results.
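As a toy illustration of this last point (a hedged sketch, not the paper's analysis): with the earlier layers frozen, the last-layer problem is convex, so plain online SGD on fresh samples drives the error down. All names, constants, and the realizable target below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
m, d, T = 32, 8, 2000  # width, input dim, number of online steps (illustrative)

# Frozen random features stand in for the already-trained earlier layers.
U = rng.standard_normal((d, m)) / np.sqrt(d)
phi = lambda x: np.maximum(x @ U, 0.0)

a_star = rng.standard_normal(m) / np.sqrt(m)  # hypothetical realizable target
a = np.zeros(m)
err0 = np.linalg.norm(a - a_star)

for t in range(1, T + 1):
    x = rng.standard_normal(d)     # one fresh sample per step: online SGD
    f = phi(x)
    y = float(f @ a_star)
    # The per-sample squared loss (f @ a - y)^2 is convex in a;
    # its gradient is 2 * (f @ a - y) * f.
    a -= (0.02 / np.sqrt(t)) * 2.0 * (f @ a - y) * f

print("last-layer error before / after:", err0, np.linalg.norm(a - a_star))
```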
Thank you for pointing out the typos; we will fix them in the next revision of our paper. Line 298 should read $\mathbb{K}f^*(x) = \Theta(d^{-2})\cdot(x^TAx + o_d(1))$.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I will maintain my score. | Summary: In this work the authors show that there is a three layer neural network setup with better provable learning guarantees than the current best bound for two layer setups. The setup involves a randomly initialized layer with frozen weights, which feeds into a two layer network where first the hidden layer weights are trained, and then the output weights are trained. In the case of functions of quadratic features, this three layer (but two learnable layers) network is able to use feature learning to re-weight the initial random feature kernel to improve training efficiency.
Strengths: The model used in the paper is a simple and sensible extension of the two layer network. The learning algorithm is also quite reasonable, both in its connection to previous work as well as to the practical end of deep learning.
The arguments seem correct (though I did not fully validate the detailed proofs in the appendices), and the intuitive explanation relating feature learning and eigenspace weighting helps shed light on the importance (and perhaps mechanisms of) feature learning in deep networks. The bounds in the 3 layer case are a marked improvement over the kernel learning and 2 hidden layer case bounds.
Weaknesses: There are two concerns with this paper. The first is a question of the tightness of the bounds. While the bounds show great improvement over the quoted bounds in the two hidden layer case, it is not clear how tight these bounds are in the various scenarios discussed.
Relatedly, there is a big question as to whether or not the bounds are useful for understanding the success of deep learning systems even on simple problems (e.g. FCN on MNIST). In particular, the proof sketch seems to suggest that just a single step of GD in the middle layer is enough to induce massive improvements in the sample complexity; however even in simple settings, it seems that it is helpful to both learn in multiple layers, as well as over multiple timepoints. I note that this is a general weakness of similar sample complexity analyses and not of this particular paper, and the authors do mention this in the discussion.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: What evidence is there that the difference between the 2 and 3 layer bounds will persist even as theoretical techniques/the bounds themselves are improved?
How do the 3 layer feature learning bounds differ from NTK learning of a similar 3 layer network? (This discussion would be useful to add to the main text to emphasize the benefits of feature learning over depth alone.)
Can the method be used to show anything about the usefulness of running gradient descent over a longer period of time? How does this trade off with the sample complexity?
Is there any intuition about good choices of the function q in the bounds, in a more general setting?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed and thoughtful review, and address specific concerns of yours below.
> “While the bounds show great improvement over the quoted bounds in the two hidden layer case, it is not clear how tight these bounds are in the various scenarios discussed.”
- For the quadratic feature setting (Section 4.2), the information-theoretic lower bound on the sample complexity is $d^2$, as there are $O(d^2)$ free parameters in the matrix $A$. Our Theorem 3 shows that the three-layer network can learn the target with $d^4$ samples, which is within a polynomial factor of this optimal sample complexity (and crucially still better than kernel methods or two-layer networks). Similarly, the information-theoretic lower bound for single-index models is $O(d)$, while our algorithm obtains an $O(d^2)$ sample complexity. Improving the sample complexity to the information-theoretic threshold is an interesting direction for future work.
> “...there is a big question as to whether or not the bounds are useful for understanding the success of deep learning systems even on simple problems (e.g. FCN on MNIST)....it seems that it is helpful to both learn in multiple layers, as well as over multiple timepoints.”
- We certainly agree that there is more work to be done in understanding the effectiveness of deep learning on realistic datasets. In more realistic settings, it is plausible that training all layers jointly and for multiple timesteps leads to improved results over the layerwise training procedure. We view our work as elucidating the capabilities of three-layer networks, in particular in comparison to two-layer networks. Specifically, our results provide a setting in which three-layer networks can efficiently learn a class of functions that cannot be efficiently learned using two-layer networks.
- Furthermore, the recent work [43] shows that shallow MLPs trained via standard techniques on image datasets do learn features corresponding to the linear feature obtained via a single step of GD for two-layer networks. An interesting direction of future study would be to understand whether this phenomenon also holds for three-layer networks.
> “What evidence is there that the difference between the 2 and 3 layer bounds will persist even as theoretical techniques/the bounds themselves are improved?”
- We remark that our lower bound in Section 5 shows that no polynomially sized 2-layer network can express $f^*$ below an error threshold. This implies that no algorithm can learn $f^*$ with polynomially many samples / in poly time. Thus even if more refined bounds were proven for 2 layer neural networks, $f^*$ would still not be learnable and the separation would still exist.
> “How do the 3 layer feature learning bounds differ from NTK learning”
- The NTK is a kernel method, and thus the discussion in Section 3, point 2 also applies to the NTK. Since the NTK for any depth is a rotationally invariant kernel, the lower bound from [27] applies. Therefore, in the quadratic feature example in Section 4.2, $d^{2p}$ samples are needed to learn $g^*(x^TAx)$ for a degree $p$ polynomial $g^*$ via the NTK. We mention this point in line 305, but we will update the exposition to make clear that this lower bound applies to the NTK for a depth 3 network as well.
> “Can the method be used to show anything about the usefulness of running gradient descent over a longer period of time?”
- Understanding the sample complexity benefit of training $W$ for longer is indeed an interesting direction of future work. While initial works studying feature learning in two-layer neural networks such as [8, 18] relied on the single-step training procedure, later works [2] studied training $W$ for multiple steps and were able to improve the dimension dependence in the sample complexity. For three layer neural networks, it is possible that a more refined analysis of multi-step training could improve the sample complexity even further. However, analyzing the training dynamics of $W$ for multiple steps introduces a number of new technical challenges which will likely require developing new techniques. As such, we defer this to future work.
> “Is there any intuition about good choices of the function q in the bounds”
- For the general theorem, if $f^*$ possesses the hierarchical structure that $f^* = g^* \circ h^*$ where $h^*$ is the learned feature, then one should choose $q = g^*$ in the main theorem. In the examples in Section 4, we see that a good choice is $q = g^*$; however, we don’t have a clean description of the optimal $q$ in the general setting.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: I thank the authors for their detailed responses. Between their comments here and their comments to the rest of the reviewers, my main concerns have been addressed and I will update my review score. | Summary: This paper analyzed the features learned by a three-layer network trained with layer-wise gradient descent as existing analyses are largely restricted to two-layer networks. It presented a general purpose theorem that upper bounds the sample complexity and width needed to achieve low test error when the target has a certain hierarchical structure.
Strengths: - This paper shows that a three-layer network can learn nonlinear features, whereas existing work has only shown that two-layer neural networks learn linear functions of the input.
- It’s good that the paper has an example section (Sec. 4.)
Weaknesses: - The “hierarchical function” that this paper deals with is not defined or introduced early enough. It will make the problem that the paper solved more clear if that point is made clear in an earlier part of the paper.
- This work still does not answer the same questions for convolutional networks.
- This paper certainly showed that three-layer networks have provably richer feature learning capabilities than two-layer networks. Nonetheless, it is not guaranteed that the findings and proofs extend to arbitrary depth in the way shown for the step from two-layer to three-layer networks (although this seems likely).
- In other words, the issue is the generalizability of the results with respect to depth.
- This paper lacks discussion of its own limitations.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Do the authors think this can be applied to convolutional networks and networks with more depth than three layers?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: No limitations are discussed in the paper. It would have been nicer if the authors discussed the generalizability of the work to a deeper network.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed and thoughtful review, and address specific comments below.
> “It will make the problem that the paper solved more clear if that point is made clear in an earlier part of the paper.”
- Thank you for this feedback; we will add more details about what specifically we mean by a hierarchical function to the introduction section in a future revision of our paper.
> “This work still does not answer the same questions for convolutional networks.”
- While understanding the optimization and generalization properties of convolutional networks (or other modern architectures such as transformers) is indeed an interesting question, it is beyond the scope of our current work. We remark that showing an end-to-end learning guarantee for a two-layer convolutional network beyond the kernel regime is still an open question. Prior guarantees for feature learning in fully connected networks have largely been restricted to two-layer networks, and our work makes progress on extending these results to three-layer networks.
> “Nonetheless, it is not guaranteed that the findings and proof can still be applied to any general “depth””
- We agree that it is also an interesting question to understand what kinds of hierarchical functions can be learned by networks of depth >3. We anticipate that a similar layerwise training procedure and analysis could allow us to show learnability of a class of hierarchical functions with deeper networks, but we leave investigation of this to future work.
> “This paper lacks discussion of its own limitations.”
- We included a limitation section within the supplementary material, but we can certainly move this discussion of limitations to the main text in a future revision of our paper.
---
Rebuttal Comment 1.1:
Comment: Thanks for the responses. I maintain my rating. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
In Defense of Softmax Parametrization for Calibrated and Consistent Learning to Defer | Accept (poster) | Summary: This paper considers the problem of learning to defer, and propose a softmax-based surrogate loss for the task, which is consistent and can be well-calibrated. The paper theoretically proves that one needs asymmetric losses for a bounded probability estimate---which in turn leads to a better calibration properties---(Theorem 1), and show the consistency of the proposed loss (Theorem 2). The paper also gives the risk transfer bound to the original zero-one loss (Theorem 3). Empirically, the proposed surrogate function enjoys the best of both worlds, achieving low error rates and good calibration properties.
Strengths: - Theoretical contributions of the paper is very good. Theorem 1 is a strong result, which may be useful in guiding the design of other softmax-based estimators for nonclassic (e.g., composite) learning tasks. Other two are somewhat standard, but are very essential in guaranteeing that the proposed loss works as desired.
- The proposed asymmetric softmax-parameterized loss is very simply designed. The loss can be implemented very easily and may not introduce too much computational overhead to the training procedure.
- The empirical performance of the proposed loss function is strong, consistently achieving the best performance over the tested setups.
- The paper is very clearly written, especially for a paper that is heavily loaded with various notations.
Weaknesses: - It is not clear **why** we want to stick to softmax-based losses in the first place. It is definitely cool that one can devise a better-calibrated softmax-based loss, but why do we need one when we already have a decent non-softmax loss? We only know post hoc that such an estimator has some empirical benefits, but the reason why it works better than the non-softmax one remains a mystery. In the manuscript, the authors try to justify the effort by saying "Given the wide use and practical advantages of softmax parameterization (...)," but the reason is never spelled out clearly. For a better understanding of the advantages of the proposed loss, we need a more detailed explanation (or maybe a good reference).
- I find that the empirical validation of the loss function to be somewhat limited. I don't usually ask for more experimental validations for theory papers. However, I think it is quite essential for this paper specifically, because none of the theoretical results actually explain why the proposed loss function should work better than the OvA loss. The main experiments are on the synthetic CIFAR100 dataset, with an additional figure on CIFAR10H (where are the "error rate" results for CIFAR10H?). This is very limited; [42] performed experiments on CIFAR10H, Hatespeech, Galaxy-Zoo, and HAM10000; [28] has experiments on CIFAR10H, CheXpert, and Hatespeech. In this sense, having only CIFAR100 results stated explicitly as numbers seems rather inadequate, as we cannot really check whether the baselines have been implemented/tuned correctly.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - What is the reason we should prefer softmax-based losses?
- Please provide more experimental results on other datasets.
- Please provide explicit numbers for the CIFAR10H experiments; this is for the sake of checking whether the baselines have been implemented and tuned correctly.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 2 fair
Limitations: The limitations are adequately stated in the manuscript, but only in the appendices.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. Please provide more experimental results on other datasets.**
A1. Thank you for your constructive advice! We have added more experiments on datasets with real-world expert annotations, including Hatespeech [1] and ImageNet-16H [2] (2 tasks: “080” and “095”). The experimental results are provided in the one-page attachment. According to the results, the performance of our proposed surrogate continues to be comparable to or better than the baselines.
The detailed experimental setup is provided below. For the models used for Hatespeech and ImageNet-16H, we apply the settings in [3]. For Hatespeech, SGD with cosine annealing is used as the optimizer for all methods for 50 epochs, and the learning rate, momentum, and batch size are 0.1, 0.9, and 1000. For both ImageNet-16H tasks, the optimizer is Adam, and the learning rate, batch size, and number of epochs are set to 1e-3, 100, and 100. The training-validation-testing split is 7:1:2 for both datasets.
**Q2. Please provide explicit numbers for the CIFAR10H experiments; this is for the sake of checking whether the baselines have been implemented and tuned correctly.**
A2. Thank you for raising this concern! In the main body, we conducted experiments with data from CIFAR-10H only to visualize the result of expert accuracy estimation, and thus the accuracy of all methods is around 50%, which is far from optimal. To better reflect each method's ability to estimate expert accuracy while evaluating the other abilities at the same time, we conducted additional experiments with data from both CIFAR-10H and CIFAR-10, as in [3]. The new visualization and a table of experimental results are provided in the one-page attachment. The experimental results show that our proposed method still outperforms the baselines. While the new visualizations of the probability estimates of S-OVA, A-OVA, and A-SM (proposed) remain similar to those in the main body, it can be seen that the estimates of S-SM severely suffer from the problem of unbounded confidence, which again shows the necessity of studying bounded estimators.
The details of the experimental setup are listed below. The choice of model is the same as in [4], while the optimizer is SGD with cosine annealing. The learning rate, momentum, batch size, and number of epochs are 0.1, 0.9, 128, and 200. The training-validation-testing split is 7:1:2.
**Q3. What is the reason we should prefer softmax-based losses?**
A3. Thank you for raising this concern! In ordinary multi-class classification, softmax-based loss functions, e.g., the softmax cross-entropy loss, are the most widely used losses for training DNNs due to their superb performance. This is because the softmax parameterization naturally models the K-dimensional probability simplex, a property that methods based on binary classification reduction, e.g., one-versus-all, fail to achieve. Given the resemblance between multi-class classification and learning to defer, it is reasonable to infer that softmax-based losses can retain their advantage in L2D if implemented properly. The additional experimental results also support this reasoning. As a result, we conclude that in L2D and its probability estimation, softmax-based losses should not be discouraged; rather, they should be given priority, as in the multi-class classification task.
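As a minimal illustration of the simplex point above (with made-up scores, not the paper's asymmetric parameterization): a softmax head yields a valid distribution over K classes by construction, whereas K independent one-versus-all sigmoids need not.

```python
import numpy as np

logits = np.array([2.0, 0.5, -1.0])  # hypothetical scores for K = 3 classes

# Softmax parameterization: non-negative and sums to 1 by construction,
# i.e., a valid point on the K-dimensional probability simplex.
p_softmax = np.exp(logits) / np.exp(logits).sum()

# One-versus-all reduction: K independent sigmoid estimates; the resulting
# vector need not sum to 1, so it is not itself a distribution over classes.
p_ova = 1.0 / (1.0 + np.exp(-logits))

print("softmax sum:", p_softmax.sum())  # 1 (up to floating point)
print("OvA sum:", p_ova.sum())          # ~1.77
```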
[1]. Davidson, T., Warmsley, D., Macy, M., and Weber, I. (2017). Automated hate speech detection and the problem of offensive language. In Eleventh international aaai conference on web and social media.
[2]. Kerrigan, G., Smyth, P., and Steyvers, M. (2021). Combining human predictions with model probabilities via confusion matrices and calibration. Advances in Neural Information Processing Systems, 34.
[3]. Hussein Mozannar, Hunter Lang, Dennis Wei, Prasanna Sattigeri, Subhro Das, and David A. Sontag. Who should predict? exact algorithms for learning to defer to humans . AISTATS 2023: 10520-10545
---
Rebuttal Comment 1.1:
Comment: Thank you for the thorough response and providing additional experimental results. I do feel that my concerns have been well-addressed. Raised the score.
---
Reply to Comment 1.1.1:
Title: Thank you for increasing the score
Comment: We are glad to hear that we addressed your concerns. We really appreciate your valuable suggestions and the time you spent on our paper, and thank you for updating the score!
Strengths: 1. The authors successfully enable the utilization of softmax parameterization in L2D, which is popular but discouraged in previous works. They provide both theoretical and experimental justifications for this approach. The proposed loss formulation is intuitive and straightforward to implement, requiring only the addition of a single output without modifying the scoring outputs of the classifier.
2. The authors present a theoretical analysis highlighting the unbounded nature of symmetric losses in L2D. They demonstrate that the proposed loss function benefits from its asymmetric structure, emphasizing the non-trivial nature of their work.
3. The authors offer theoretical guarantees regarding the proposed loss function, ensuring that it will never produce an unbounded probability estimator. This rigorous analysis eliminates any concerns regarding unbounded behavior in the proposed method.
4. This work includes several figures that effectively compare the proposed method with related approaches. These figures illustrate the issues associated with unbounded probability estimators, providing persuasive evidence to aid readers in understanding the main points of the paper.
5. The authors thoroughly validate the performance of the proposed method using a variety of evaluation metrics and settings. Their results demonstrate that the proposed approach is comparable to, if not superior to, baseline methods not only in the context of probability forecasting but also in predictive tasks without coverage constraints.
Weaknesses: The authors are encouraged to provide a finite sample analysis, specifically an estimation error bound, to complement their theoretical analysis and ensure completeness. Considering that the newly proposed asymmetric softmax parameterization may alter the Lipschitz constant of the score function "g," the estimation error bound becomes a non-trivial aspect worth investigating. Additionally, combining this estimation error bound with the proposed regret transfer bound could potentially yield a more direct and intuitive result.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: In Theorem 2, it is proved that the proposed loss will never have an unbounded probability estimator. Though the proof is clear, can you provide a more intuitive explanation of such a positive result?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. An estimation error bound should be given. The Lipschitzness of the asymmetric softmax w.r.t. g should also be clarified.**
A1. Thank you for your helpful advice! We have derived the estimation error bound of the ERM with our proposed risk and will update it in the revised version of our manuscript. Though our proposed asymmetric softmax takes a different formulation, its Lipschitzness still persists: the absolute value of the partial derivative of $\tilde{\psi}_{i}(g)$ w.r.t. $g_{j}$ is at most 1 for any $i, j \in [K+1]$, which establishes the Lipschitzness of the asymmetric softmax.
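As an illustration of this kind of partial-derivative bound, the sketch below runs a finite-difference Jacobian check. Since the exact asymmetric parameterization $\tilde{\psi}$ is defined in the paper and not reproduced here, the standard softmax is used as a hypothetical stand-in; the same check could be pointed at any parameterization.

```python
import numpy as np

def softmax(g):
    # standard softmax, used as a stand-in for the paper's asymmetric map
    e = np.exp(g - g.max())
    return e / e.sum()

def jacobian_fd(f, g, eps=1e-6):
    # forward finite-difference Jacobian of f evaluated at g
    base = f(g)
    J = np.zeros((base.size, g.size))
    for j in range(g.size):
        gp = g.copy()
        gp[j] += eps
        J[:, j] = (f(gp) - base) / eps
    return J

rng = np.random.default_rng(0)
g = rng.normal(size=6)          # K + 1 = 6 scores
J = jacobian_fd(softmax, g)
# every partial derivative is bounded by 1 in absolute value
assert np.max(np.abs(J)) <= 1.0
```

For the standard softmax the analytic Jacobian is $s_i(\delta_{ij} - s_j)$, so its entries are in fact bounded by $1/4$; the finite-difference check is simply a template for verifying the analogous claim about $\tilde{\psi}$.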
**Q2. A more intuitive explanation should be given to Theorem 2.**
A2. Thank you for raising this concern! A more intuitive explanation is that the existence of an unbounded probability estimator would remove some point g(x) from the set of potential solutions. However, since any point can be mapped into the desired probability region by our proposed asymmetric softmax, any point may be the solution of the point-wise risk w.r.t. some posterior probability, which leads to a contradiction. This explanation can also be seen as a sketch of the proof of Theorem 2.
---
Rebuttal Comment 1.1:
Title: To response
Comment: My concerns have been addressed. This is a good work! I am happy to accept this paper.
---
Reply to Comment 1.1.1:
Title: Thank you for supporting our work
Comment: Thank you for your constructive comments and suggestions. | Summary: This paper studies the learning to defer setup where you have to defer to an expert if the classifier is likely to be wrong. It has been shown that softmax based consistent estimators for the learning to defer losses do not provide calibrated probability estimates for the likelihood of deferring. One other work proposed non-softmax based losses which provided calibrated probability estimates but it is not known if any other softmax based loss can lead to calibrated estimates. This work shows that softmax is not the issue of the lack of calibration but using symmetric loss functions is. This work then proposes a softmax based asymmetric loss function that leads to calibrated and bounded probability estimates. This work also experimentally verifies the effectiveness of using their loss function.
Strengths: - I find the idea of the paper really interesting. Since softmax based losses are the most commonly used loss functions, it makes sense to be able to use those loss functions for learning to defer framework. Moreover, this work identifies the fundamental issue present in the original loss function which was leading to poorly calibrated probability estimates. I find this insight on using asymmetric loss functions interesting.
Weaknesses: Right now, the paper sometimes becomes hard to follow as there are a lot of notations used. One thing that could be improved is having a common notation section in the beginning which could be used to look up symbols. Overall, the presentation of the paper could be improved.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: In definition 1, R^c loss is not defined?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. The presentation should be improved. It is encouraged to add a separated section to introduce the used notations.**
A1. Thank you for your constructive advice! We will summarize the notations used and add a dedicated notation section in the revised manuscript.
**Q2. The R^c loss is not defined in Definition 1.**
A2. Thank you for raising this concern! The $R^{c}_{01}$ should instead be the 0-1 deferral risk, and we will correct it in the revised version of this manuscript.
---
Rebuttal Comment 1.1:
Title: Response
Comment: I thank the authors for addressing the concerns. I would like to keep my score.
---
Reply to Comment 1.1.1:
Title: Thank you for supporting our work
Comment: Thank you for your reply! We sincerely appreciate your valuable comments and time on our paper. | Summary: This paper shows that the miscalibration of the softmax-based surrogate loss for learning to defer is due to its symmetry. Instead, an asymmetric softmax-based surrogate loss is proposed and proved to be both calibrated and consistent. More generally, they reveal the connection between miscalibration and the symmetry of the used loss function and propose to design L2D surrogates based on asymmetric multi-class loss functions.
Strengths: The paper is well-organized and enjoyable to read though the notation is a bit heavy. They not only give a careful analysis of softmax-based loss functions but also provide very interesting insights regarding the miscalibration of surrogate loss functions for learning to defer and the symmetry of the multi-class loss functions they are based on.
Weaknesses: As stated in the paper, the 0-1 deferral loss studied here is not the most general version. For important settings including where the expert is a bigger model with higher accuracy but larger inference cost than the base model, an additional constant cost is required to reflect the extra inference cost. [31] shows that both [28] and [42] suffer from underfitting if the constant cost is non-zero. I am wondering if the proposed asymmetric softmax-based surrogate loss can be generalized to this setting and if it will have similar issues.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Could you give more details on how Definition 3 is derived based on the asymmetric softmax function?
minors:
1. The notation of the risk is different in Definition 1 from in (1).
2. At the end of line 97, the transformation should map from R^{K + 1}.
3. It is better to specify the section when referring to the appendix.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: See Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. Can the proposed asymmetric softmax-based surrogate be generalized to the setting where an additional constant cost is triggered when the model chooses to defer to experts? If such a generalization is available, will it have the same issue of underfitting as shown in [1]?**
A1. Thank you for raising this concern! Since our surrogate can be seen as induced from a classification-calibrated multi-class classification surrogate, we can directly generalize it to the case where the additional cost $c_{0}$ is non-zero, based on Proposition 2 of [1]: $L_{\tilde{\psi}}(u,y,m)=-\sum_{i\in[K]}(\max_{j\in[K+1]}c(j)-c(i))\log(\tilde{\psi}_{i}(u))-(\max_{j\in[K+1]}c(j) -c(K+1))\log(\tilde{\psi}_{K+1}(u)) $.
When the expert makes a wrong prediction, i.e., $\max_{j\in[K+1]}c(j)=1+c_{0}$, the surrogate can be further written as $L_{\tilde{\psi}}(u,y,m)=-\log(\tilde{\psi}_{y}(u))-c_{0}\sum_{i\in[K]}\log(\tilde{\psi}_{i}(u)) $. Notice that the second term is exactly a **label smoothing term**, which is potentially the root of underfitting as stated in [1] (in the second paragraph of Section 3.1). Based on this theoretical observation, we can infer that underfitting is likely to occur. I think combining the post-hoc method in [1] and designing surrogates that can be free from such label smoothing terms can be promising future directions.
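The label-smoothing reading of the second term can be verified numerically. The sketch below is an illustrative check with a generic probability vector (not the paper's parameterization): the wrong-expert surrogate above equals a cross-entropy against an unnormalized target that puts weight $c_0$ on every class plus an extra unit of weight on the true label.

```python
import numpy as np

K, c0 = 5, 0.3                      # assumed toy sizes
rng = np.random.default_rng(1)
p = rng.dirichlet(np.ones(K + 1))   # probabilities over K classes + deferral
y = 2                               # true class label (y < K)

# wrong-expert surrogate from the rebuttal
loss = -np.log(p[y]) - c0 * np.log(p[:K]).sum()

# equivalent cross-entropy against a smoothed (unnormalized) target
target = np.zeros(K + 1)
target[:K] = c0                     # uniform smoothing over the K classes
target[y] += 1.0                    # plus the usual one-hot weight
assert np.isclose(loss, -(target * np.log(p)).sum())
```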
**Q2. The authors should give more details on how Definition 3 is derived based on the asymmetric softmax function.**
A2. Thank you for raising this concern! In detail, we first map the scoring function from $R^{K+1}$ to $\Delta^{K}\cup[0,1]$ using the asymmetric softmax function. Then we can train a probability forecaster for classifier and expert accuracy simultaneously by minimizing the risk w.r.t. the sum of a multi-class classification surrogate and a binary classification surrogate, which is exactly Definition 3. The design of $\tilde{\psi}_{K+1}$ further binds the scores of the deferral function and the classifier, which leads to the maxima-preserving property and ensures the consistency of Definition 3. We will elaborate on this point and add it to the paragraph following Definition 3 in the future version.
**Q3. There are some minor mistakes to be corrected and the section of appendix should be specified.**
A3. Thank you for your constructive advice! We have revised our manuscript and will correct these clarity problems in the revised version of this manuscript.
[1]. Mohammad-Amin Charusaie, Hussein Mozannar, David A. Sontag, and Samira Samadi. Sample efficient learning of predictors that complement humans. In ICML, 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I have read the reviews and rebuttals. I agree that combining the proposed surrogate loss with the post-hoc method in [1] can be a promising future direction and will keep my score.
---
Reply to Comment 1.1.1:
Title: Thank you for keeping the positive score
Comment: Thank you for reading our response and keeping the positive score! We are really grateful for your time and expertise. | Rebuttal 1:
Rebuttal: ## General Response
We thank all the reviewers for their valuable comments and devoted time. We are glad that all the reviewers praise the insight and theoretical contribution of this work. We are also encouraged that the reviewers find this work easy to use (Reviewers Ztxz, r3aW), and appreciate the clarity of this paper (Reviewers Ztxz, fCzx, jwha, and r3aW).
We respond to each reviewer's comments individually and in detail. An additional PDF file that includes one figure and three tables is provided to further experimentally validate our method and support the claims in our responses. In the revised version, we will update the manuscript according to the reviewers' suggestions. We believe this will definitely enhance the quality of this work.
Pdf: /pdf/2d1de9003a1466ddab2ce913e4b435403e495120.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper studies the learning to defer (L2D framework), where one can defer to an expert decision when unsure about the model’s prediction, and a cost is incurred when either the prediction is wrong or when one defers to the expert and the expert makes a mistake.
The paper builds on top of prior work that shows that softmax parametrization can lead to unbounded probability estimators. While prior work provided a one-vs-all (OvA) formulation to solve this issue, this paper shows that the issue lies in the symmetric nature of the softmax function. The authors then proceed to introduce asymmetry in the softmax function and show that this leads to both bounded probability estimators and improved performance.
Strengths: 1. The paper is well-written and the idea is somewhat novel.
2. I enjoyed reading the theoretical analysis in both the main paper and the appendix.
3. I like that the method does not have many additional hyper-parameters to tune; this keeps the method simple and it does not feel over-engineered.
Weaknesses: 1. Asymmetric softmax functions have been studied before, although in a different context. For example, [1] proposes the LDAM loss aimed at long-tailed or imbalanced datasets. A comparison/discussion with this and other possible asymmetric variations of softmax would be appreciated.
2. The experiments involve only small-scale datasets like CIFAR-10 and CIFAR-100. While many methods perform well on these, they often do not scale well. Additional datasets would both (1) make the problem setting more appealing and (2) improve the standing of the paper.
[1] Kaidi Cao, Colin Wei, Adrien Gaidon, Nikos Arechiga, Tengyu Ma. Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss, NeurIPS 2019, https://arxiv.org/abs/1906.07413
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. How is the learning to defer paradigm comparable to selective classification [1]? In particular, one can integrate the reject option into the classifier.
2. In the learning to defer problem, a possible generalization is also associating a cost (possibly small) with deferring the prediction, even when the expert is correct, because using an expert instead of the model should be costly and avoided. Curious if any related work has considered this scenario.
[1] Yonatan Geifman, Ran El-Yaniv. Selective Classification for Deep Neural Networks. NeurIPS 2017
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. The authors should add the comparison/discussion with [1] or any possible other asymmetric variation of softmax.**
A1. Thank you for raising this concern! In this work, the asymmetric softmax is introduced to directly map the scoring function into the desired region $\Delta^{K}\cup[0,1]$ while preserving the order of each dimension of the scoring function. In [1], the softmax function is asymmetrized to simulate the trade-off between class margins that exists in imbalanced classification tasks, which can lead to smaller generalization errors if chosen properly. Though the motivations of the two versions of asymmetric softmax are different, they are both based on prior knowledge of the data distribution. In conclusion, introducing asymmetry into the original softmax function can be a promising solution when we are seeking methods to incorporate prior knowledge about the data distribution into the learning process.
**Q2. Additional datasets should be added to improve the standing and appeal of this paper.**
A2. Thank you for your valuable suggestion! To better validate the proposed method, especially on datasets with real-world expert information, we added additional experiments on HateSpeech and ImageNet-16H datasets, which are two datasets with real-world expert annotations, and the detailed information can be found in the one-page attachment.
**Q3. The authors should compare the learning to defer paradigm with selective learning framework.**
A3. Thank you for raising this concern! Learning to defer is a framework that focuses on point-wise rejection: any point whose maximum class-posterior probability is lower than the expert's accuracy will be rejected. In contrast, the rejection rule in the selective classification framework relies on the whole distribution with density $p(x,y)$: given a coverage constraint cov%, the rejection rule should accept the top-cov% samples with the highest maximum class-posterior probability. I think the goals of the two frameworks are not contradictory but complementary, and the combination of the two paradigms can be a promising future direction.
**Q4. The authors are encouraged to give any related work that associates a cost with deferring the prediction.**
A4. Thank you for raising this concern! Associating a cost $c_{0}$ with deferral has been studied in [2, 3], though some of the recent works [4, 5] on the surrogate losses for L2D only focus on the case where $c_{0}=0$. In this case, deferring to an expert will trigger a cost of $c_{0}+[[expert~prediction\not=y]]$. We find that our proposed method can be generalized to this case where $c_{0}>0$ and we provide the detailed formulation in the answer to question 2 of reviewer jwha.
[1]. Kaidi Cao, Colin Wei, Adrien Gaidon, Nikos Arechiga, Tengyu Ma. Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss, NeurIPS 2019, https://arxiv.org/abs/1906.07413
[2]. Hussein Mozannar and David A. Sontag. Consistent estimators for learning to defer to an expert. In ICML, 2020.
[3]. Harikrishna Narasimhan, Wittawat Jitkrittum, Aditya Krishna Menon, Ankit Singh Rawat, and Sanjiv Kumar. Post-hoc estimators for learning to defer to an expert. In NeurIPS, 2022.
[4]. Rajeev Verma and Eric T. Nalisnick. Calibrated learning to defer with one-vs-all classifiers. In ICML, 2022.
[5]. Hussein Mozannar, Hunter Lang, Dennis Wei, Prasanna Sattigeri, Subhro Das, and David A. Sontag. Who should predict? exact algorithms for learning to defer to humans. AISTATS 2023: 10520-10545
---
Rebuttal Comment 1.1:
Comment: The authors have answered my questions and concerns in a compact manner, I thank them! To reflect this, I have increased the score from 6 to 7.
Specifically:
1. The authors have added experimental results on two more datasets: ImageNet-16H and Hatespeech.
2. The authors have answered my question related to associating a cost with deferring the prediction.
Additional suggestion:
Similar to reviewer r3aW, I would suggest adding a few more datasets. For example, [1] uses synthetic expert data on CheXpert.
[1] Consistent Estimators for Learning to Defer to an Expert, https://arxiv.org/abs/2006.01862
---
Reply to Comment 1.1.1:
Title: Thank you for raising the score
Comment: Thanks for your recognition and suggestion. We are glad that you feel that the questions and concerns have been addressed. We will add the experiments on CheXpert and synthetic datasets in the final version. | null | null | null | null | null | null |
Dynamics of Finite Width Kernel and Prediction Fluctuations in Mean Field Neural Networks | Accept (spotlight) | Summary: The authors study a finite width correction to the Dynamical Mean Field Theory (DMFT) of finite depth neural networks in the feature learning regime. While I will be the first to admit that I am not an expert on the DMFT calculations, the authors did produce very convincing simulations capturing interesting properties of finite size networks. In particular, the edge of stability behaviour at large learning rates seems to be modelled by the finite width correction.
I hope the authors can help me clarify some questions regarding the implementation of the solver, after which I would be happy to raise my score to accept.
Strengths: The authors develop a strong theory of finite width neural networks capable of making accurate predictions.
Weaknesses: N/A
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I would like the authors to provide some intuitions about how the DMFT equations are being solved numerically. I understand this may be the contents of a previous paper (Bordelon and Pehlevan, 2022), but I am having trouble understanding this part.
1. Based on my understanding, all the elements of $q$ are processes index by time $t$. So what does it mean to compute elements of $q$ or $\Sigma$? Are the authors time discretizing these processes?
2. How should I understand the role of the propagator $\Sigma$, and how the authors solve for this object first independently of the elements of $q$?
3. How should I understand the saddle point solver in Algorithm 2? Here I am just asking for what is happening within the algorithm, as I find reading the pseudocode quite difficult to understand.
I would be happy to continue this discussion during the rebuttal period, and once again I would raise my score once I find these questions addressed.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their support and good questions. We hope to make our methods more understandable and self-contained in the paper. Below we provide some more explanations about how we solve our self-consistent equations.
### Response to Questions
1. This is a great question that we will address more in the work (Appendix E). First, the $\mathbf q$ vector represents the collection of the order parameters (kernels $\Phi^\ell_{\mu\nu}(t,s)$, predictions $f_\mu(t)$, response functions $A^\ell_{\mu\nu}(t,s)$, etc) at *all possible times through training*. Theoretically, for gradient flow this is an uncountably infinite set. For discrete time gradient descent, this is a countable set. In a practical numerical solution, we need to discretize time to compute it. For subtle theoretical reasons, the path integral is also initially derived using a discretization of all the dynamics in time (with the Ito convention). After integration over the initial weights, we can then take a continuous time limit to arrive at the action $S$ provided in Appendix D. Concretely we will add the following text to Appendix E
*For nonlinear deep networks, we build on the Monte Carlo approach developed by Bordelon & Pehlevan 2022, which solves the saddle point equations for all order parameters $q_\infty$. For a practical numerical algorithm, we discretize the time steps so that we store finite matrices (such as $\Phi^\ell_{\mu\nu}(t,s)$) with layer, sample, and time indices. Our approach allows us to estimate the entries of the action's Hessian $\nabla^2 S$ and ultimately invert it to obtain the propagator $\Sigma = \left[ - \nabla^2 S(q_\infty) \right]^{-1}$. To do this, we need to use sampling to estimate the fourth feature moments $\kappa$ and the sensitivity blocks $D$. We evaluate these averages over the stochastic process defined by $q_\infty$.*
For a detailed set of steps to compute $q_\infty$ and $\Sigma$, see the third bullet point below.
2. The propagator $\Sigma$ can be thought of as the covariance of the order parameters near infinite width. Its name in physics derives from the fact that fluctuations in the order parameters such as the kernel $\Phi^\ell$ at time $t$ can propagate noise to other order parameters (such as $\Phi^{\ell+1}$) at time $t' > t$. When we say we compute $q_\infty$, it means that we are solving the saddle point equations, which can be solved using the methods of Bordelon & Pehlevan 2022. **To compute $\Sigma$, we need to have already first solved for $q_\infty$**. We evaluate the second derivative of $S$ at this value of the order parameters $q_\infty$. This involves using similar methods (either Monte Carlo for nonlinear networks or closed-form expressions for linear networks). Again, in practice, we discretize time in a numerical algorithm to evaluate these expressions. (See below for more detail.)
3. The algorithm pseudocode can be summarized by the following high-level steps:
* Step 1: First solve the infinite width DMFT equations for $q_\infty$, which include the prediction error dynamics $\Delta_\mu(t)$, the feature kernels $\Phi^\ell_{\mu\nu}(t,s)$, and the gradient kernels $G^\ell_{\mu\nu}(t,s)$. This step corresponds to the algorithm in Bordelon & Pehlevan 2022 and defines the dynamics one would expect at infinite width.
* Step 2: Compute the entries of the Hessian of $S$ evaluated at the $q_\infty$ computed in the first step. Some of these entries look like fourth cumulants of features like $\kappa = \left< \phi(h)^4 \right> - \left< \phi(h)^2 \right>^2$ and some of them measure sensitivity of one order parameter to a perturbation in another order parameter $D^{\Phi^\ell} = \frac{\partial}{\partial \Phi^{\ell-1}} \left< \phi(h^\ell)^2 \right>$. The averages $\left< \right>$ used to calculate $\kappa$ and $D^{\Phi^\ell}$ should be performed over the infinite width stochastic processes for preactivations $h^\ell$ which are defined in equation (19).
* Step 3: After populating the entries of the block matrix for the Hessian $\nabla^2 S$, we then calculate the propagator $\Sigma$ with a matrix inversion. Since we discretized time, this is a finite-dimensional matrix.
The above text will also be added to the Appendix of our work.
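As a minimal numerical sketch of this Hessian-and-inversion step (using a hypothetical quadratic toy action rather than the DMFT action itself): evaluate the Hessian of $S$ by finite differences at the saddle point, then invert $-\nabla^2 S$ to obtain $\Sigma$.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 4                               # number of discretized time steps (toy)
M = rng.normal(size=(T, T))
A = M @ M.T + T * np.eye(T)         # positive-definite curvature (assumed)

def S(q):
    # toy quadratic action standing in for the DMFT action
    return -0.5 * q @ A @ q

def hessian_fd(f, q, eps=1e-4):
    # central finite-difference Hessian of f at q
    n = q.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            def shift(si, sj):
                qs = q.copy()
                qs[i] += si * eps
                qs[j] += sj * eps
                return f(qs)
            H[i, j] = (shift(1, 1) - shift(1, -1)
                       - shift(-1, 1) + shift(-1, -1)) / (4 * eps ** 2)
    return H

q_inf = np.zeros(T)                 # saddle point of the toy action
H = hessian_fd(S, q_inf)
Sigma = np.linalg.inv(-H)           # propagator: [-grad^2 S]^{-1}
assert np.allclose(Sigma, np.linalg.inv(A), atol=1e-3)
```

For the real action the Hessian blocks are assembled from the sampled moments rather than by finite differences, but the inversion step has the same shape.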
4. Now we will give a detailed set of intuitions about how the infinite width limit for $q_\infty$ is solved (step 1 above). This corresponds to the algorithm of Bordelon & Pehlevan 2022 to solve the saddle point equations $\frac{\partial}{\partial q} S(q)|_{q_\infty} = 0$.
* Step 1: Start with a guess for the kernels $\Phi^\ell_{\mu\nu}(t,s), G^\ell_{\mu\nu}(t,s)$ and for the predictions through time $f_\mu(t)$. We usually use the lazy limit as an initial guess.
* Step 2: Sample gaussian sources $u^\ell_\mu(t) \sim \mathcal{GP}(0,\Phi^{\ell-1})$ and $r^\ell_\mu(t) \sim \mathcal{GP}(0,G^{\ell+1})$ based on the current estimated covariances $\Phi^\ell$ and $G^\ell$ respectively.
* Step 3: For each sample, solve integral equations for $h^\ell_\mu(t)$ and $z^\ell_\mu(t)$.
These will be samples from the single site distribution for $h^\ell, z^\ell$. In a discretization, the integrals will be replaced with sums.
* Step 4: Average over the Monte-carlo samples to produce a new estimate of the kernels, for instance $\Phi^\ell_{\mu\nu}(t,s) = \left< \phi(h^\ell_\mu(t)) \phi(h^\ell_\nu(s)) \right>$. A similar procedure is performed for $G^\ell$ and the response functions $A,B$.
* Step 5: Compute the NTK estimate $K(t) = \sum_\ell G^{\ell+1}(t,t) \Phi^\ell(t,t)$ and then integrate prediction dynamics from the dynamics of the NTK $\frac{d}{dt} f_\mu(t) = \sum_\nu K_{\mu\nu}(t) \Delta_\nu(t)$.
* Repeat steps 2-5 until the order parameters converge.
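A scalar toy analogue of steps 2-5 may help make the loop concrete. The sketch below assumes a tanh nonlinearity, a fixed unit input kernel, and a single time point, so it is far simpler than the multi-time, multi-layer system above; it only illustrates the damped Monte Carlo fixed-point structure.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=100_000)     # frozen standard-normal draws (common random numbers)
phi_in = 1.0                     # fixed "input kernel" feeding the layer (assumed)
Phi = 0.0                        # Step 1: initial guess for the feature kernel

for _ in range(200):
    # Step 2: sample the Gaussian source with the current covariance estimate
    u = np.sqrt(phi_in + Phi) * z
    # Step 4: re-estimate the kernel as a Monte Carlo average over samples
    Phi_new = np.mean(np.tanh(u) ** 2)
    if abs(Phi_new - Phi) < 1e-8:
        break                    # order parameter has converged
    Phi = 0.5 * Phi + 0.5 * Phi_new   # damped update for stability
```

In the actual solver the kernels are matrices over sample and time indices and the averages also update the response functions, but the damped self-consistent iteration has the same structure.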
Please let us know if these explanations helped clarify the numerical methods and if there are any remaining questions or concerns.
As we mentioned in the global response, we are also going to add a more self-contained derivation of the DMFT action (similar to Bordelon & Pehlevan 2022).
---
Rebuttal Comment 1.1:
Title: Response
Comment: I apologize for the delayed response. I have been traveling and wrestling with a heavy review load this year.
I believe my concerns have been addressed, and I will raise my score to accept. | Summary: Building on past work which set up a DMFT (Dynamical Mean Field Theory) for fully connected networks in the infinite width limit (where the width of each layer tends to infinity), this paper reasons about the *fluctuations* around the infinite width limit. This is important because for finite sized neural networks, these fluctuations are large enough that they are an important part of the network training dynamics. The description of the DMFT is theoretically complicated and cannot be solved exactly, but they enable simulations which confirm that the DMFT captures the behaviour of finite sized real networks quite accurately. As an application, this DMFT theory is used to understand bias, training rates, and variance in realistic tasks and the special case of 2-layer networks (where the theory is quite a bit simpler) is investigated in more detail. This theory has the potential to open the door to many potential future uses that explain how neural networks learn.
Strengths: This paper lays out a framework for theoretical understanding of deep neural networks that incorporates the effect of finite width. This is a problem that has received a lot of attention since the original NTK/infinite width limits came out, and as far as I know, the approach here is novel and powerful. The paper is well written for the most part although familiarity with the infinite width DMFTs is assumed.
Overall, the fact that the theory and the simulations agree very well is quite impressive, and I think the ideas in the paper are quite ambitious because they can be used for almost any kind of question one might have about the theoretical evolution of the DNN. This paper has the potential to be the basis for future work which uses the theory developed here to investigate questions about how DNNs learn.
Weaknesses: The main weakness of the paper is that its a bit spread thin at times: both the theory and a few different applications are covered, but it seems like the authors were trying to make it all fit and I would have liked more detail in a few spots. This is largely due to the page limit of the submission. I personally would have found it to be a stronger paper if a single really clear example was presented in a lot of detail. (Although again, I completely understand that this is largely pressure from the conference format to try and do a lot of stuff)
The other main "weakness" of the paper (which is strictly speaking a limitation of the audience of the paper) is that to understand it, you need to be familiar with the previous DMFT on which this paper is built. The authors include a very short section called "Review of Dynamical Mean Field Theory" citing [9],[46] as a review, but this section is extremely sparse for actually understanding what is going on. I essentially had to read [9] in its entirety first to understand what was going on in this paper. (Also the reference [46] could not be found since only authors and title are given...where would one find this reference?) In my view, this weakness could be mitigated by just being more honest with the reader up front about this...for example, [9],[46] should be cited at point 1 in the list of contributions to make the dependence clearer and to clarify what is/isn't actually explained in this paper.
Another (related) "weakness" is that the paper relies quite heavily on physics technology and jargon to reach its conclusions. The fact that the results are so heavily entrenched in physics jargon like "order parameters" or "propagator" makes this paper less likely to have a broad impact on the deep learning community. The authors would add a lot of value to the work by attempting to make a "translation guide" to help people who don't have the same physics background understand what is going on in more detail.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * Suggestion: Eqn (3) seems like a very important main result: a bit more discussion on the next order term and why it's actually size 1/N (even though it starts with an N) would be helpful.
* I am wondering if you had any numerical simulations where you checked the rate of convergence between the theory here and simulations (which should presumably be like 1/N^2?). Something like Figure 2a but comparing to the fluctuation predictions rather than to $q_\infty$.
* One high-level question I had: It seems that if the learning rate is fixed and not scaled, then the fluctuations and the effect of the learning rate are both on the same scale $1/\sqrt{N}$. 1. Is that correct? and 2. Does your theory work to analyze what's going on in that case?
Here is a list of other minor errors/suggestions I found while reading:
* Eqn (2): use definition equal with three lines to be consistent with definitions later on
* Line 179 vs 185: Is there a difference between $K$ and $K_\infty$? If so what is it?
* Line 188: What does the subscript 0 in "$Cov_0$" mean here?
* Section 6.1: I was able to more-or-less piece this together, but I think it would be a lot more understandable if you gave explicit definitions for $K(t)$ and $K_\ast(t)$
* Line 232-235: I think it would be a lot clearer to write out the definitions here of the two new $\Delta$'s in terms of $\Delta_\mu$.
* References [15],[21],[33],[46],[53] have only author/title but not where published/where to find them.
* Reference [22] missing a title?
* I would also check the arxiv-only references, e.g. [11],[13],[20],[27],[34],[57],[63], to make sure there isn't a conference or journal version that is now published.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: A potential criticism is the physics level of rigor used in Appendix C, which is used to establish the main results. The manipulations carried out in the proof of Appendix C certainly seem plausible, and I believe the community as a whole is ok with this level of rigor, but the authors could be a bit more clear about what they mean by "proof" in the main paper. It is not a mathematically rigorous proof (which would involve all sorts of technical assumptions), but rather a physics-type statement that holds assuming the usual expansions can be carried out without obstructions. To reiterate: I think the actual work is fine, but they could be a bit more honest about how it is "proven" and the level of rigor in the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We thank the reviewer for appreciating the strengths of our approach and its applicability to wide DNN dynamics. Below we provide some responses to the weaknesses, questions and limitations.
**Responses to Weaknesses**
We agree that the paper is spread a bit thin at times. Based on the detailed comments and questions of this reviewer, we have added more detailed definitions of the terms which appear in the main text equations. Given an additional page for a final draft, we will expand our comments on the setup of the calculation, the statement of the main results and their implications.
Based on the comments of all reviewers, we will add a more self-contained derivation of DMFT in the Appendix which will introduce the main concepts and derive the action $S$ which plays such a central role in this paper. We will also add a table, as the first reviewer suggested, which translates the physics jargon for these objects into more ML-theory-friendly language.
We agree with the reviewer that we do not provide formally rigorous proofs of our results, but derive them at a physical level of rigor. We will add a sentence in the discussion acknowledging this limitation and will leave it open for future works to provide rigorous proofs of these dynamical expansions.
**Responses to Questions**
1. This is a good question/suggestion. Indeed the raw covariance of the order parameters is size $\mathcal{O}(1/N)$. We define the propagator $\Sigma \equiv N \text{Cov}(q) \sim \mathcal{O}(1)$, i.e., $N$ times this covariance, so that its entries behave as $\mathcal{O}(1)$ quantities. Thus equation 7 can be solved for $\Sigma^{\Delta} \sim \mathcal{O}(1)$ once for all possible $N$. When we want to compare to simulations of a finite width $N$ network, we can multiply the empirically observed covariance by $N$ and compare to $\Sigma$. As the reviewer points out, the covariance $N \text{Cov}(q) \sim \Sigma + \mathcal{O}(N^{-1})$ is correct up to a subleading term. This is established in Appendix 3.1 and 3.3, specifically equations 15 and 17 (disregard the typo in the sentence above equation 17). We will comment on this near equation 3.
2. When we submitted the paper, we did not have simulations showing that the predicted covariance $\frac{1}{N}\Sigma$ is correct up to $\mathcal{O}(N^{-2})$. To accurately measure the deviations between the order parameter covariance and the theoretically predicted propagator $\Sigma$, we add a simulation in the attached PDF which shows that this rate is accurate. Estimating this error rate requires a very large number of neural networks (many more than are needed to estimate $\text{Cov}(q)$), so we focus on the variance of ReLU feature and gradient kernels at initialization, where we can exactly compute $q_\infty$ and $\Sigma$.
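The flavor of this check can be sketched in a few lines (a toy illustration of ours, not the experiment in the attached PDF): for the diagonal entry $q$ of a ReLU feature kernel at initialization, $N\,\mathrm{Var}(q)$ should be roughly constant in $N$ (analytically $5/4$ for unit-variance preactivations):

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 20000

def relu_kernel_var(N):
    """Variance over random inits of the diagonal feature-kernel entry
    q = (1/N) sum_i relu(z_i)^2, with i.i.d. preactivations z_i ~ N(0, 1)."""
    z = rng.standard_normal((trials, N))
    q = np.mean(np.maximum(z, 0.0) ** 2, axis=1)
    return q.var()

# N * Var(q) plays the role of the propagator entry Sigma here and should be
# approximately N-independent (about 5/4 = Var(relu(z)^2) in this toy case).
for N in (256, 1024):
    print(N, N * relu_kernel_var(N))
```

Both printed values should hover near $5/4$, illustrating why the rescaled covariance is the natural $\mathcal{O}(1)$ object to compare against theory.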
3. On the learning rate scaling with $N$ question, we have a few comments. First, consider the learning rate in the gradient flow $\frac{d}{dt}\theta = N \gamma^2 \nabla \mathcal{L}$: if the factor of $N$ is removed, it will take a time $t \sim \mathcal{O}(N)$ to make progress on the training loss, but in gradient flow this is just a rescaling of the time axis in all our plots (so our theory still applies). What happens if other parameters (such as the feature learning rate $\gamma$ or the discrete time step size $\eta$) depend on $N$? First, what if $\gamma$ is dependent on the width $N$? In the classic NTK parameterization where $\gamma = \frac{1}{\sqrt N}$, we can still compute a predicted dynamics for the kernels and fluctuations. First, each finite width $N$ network has a corresponding infinite width (feature learning) network with order parameters $q_{\infty}(\gamma = \frac{1}{\sqrt N})$. Second, the propagator can be computed as $\Sigma(\gamma= \frac{1}{\sqrt N})$, in both cases evaluating the dynamics at a feature learning velocity $\gamma$ which depends on the width. The predicted covariance would be $\text{Cov}(q) \sim \frac{1}{N} \Sigma(\gamma =\frac{1}{\sqrt N})$. This makes understanding the effect of width possible but more complicated in NTK parameterization, since decreasing $N$ has two effects (it increases both feature learning and the fluctuation variance). In mean field parameterization/$\mu P$, the feature learning parameter $\gamma$ is fixed with width, so the same $q_{\infty}(\gamma)$ and $\Sigma(\gamma)$ can be used to estimate finite width effects for different widths $N$ simply by multiplying by $1/N$. What if the raw learning rate $\eta$ in discrete time is scaled differently with $N$? This will also change the dynamics, but it will lead to a badly behaved $q_\infty$.
For instance, if $\eta$ is rescaled by $1/\sqrt{N}$, the neural network will not fit the data in finite time at infinite width, and if $\eta$ is multiplied by a higher power of the width $N$, the dynamics will become unstable in discrete time. We add a comment on this in Section 4.
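To summarize the two parameterizations discussed above in one display:

$$
\mu P\ \text{(mean field)}:\ \gamma=\mathcal{O}(1),\ \ \text{Cov}(q)\sim\tfrac{1}{N}\,\Sigma(\gamma);\qquad \text{NTK}:\ \gamma=\tfrac{1}{\sqrt{N}},\ \ \text{Cov}(q)\sim\tfrac{1}{N}\,\Sigma\big(\gamma=\tfrac{1}{\sqrt{N}}\big).
$$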
**Minor Comments**
1. We will be sure to use $\equiv$ when we define new terms (such as in Eq 2).
2. Yes, there is a difference between $K$ and $K_\infty$. The $K$ should be thought of as the random finite width NTK while $K_\infty$ is the infinite width kernel. We will add a sentence clarifying this in this section.
3. In this section by $\text{Cov}_0$, we meant the leading order covariance (neglecting $1/N^2$ and smaller terms). We will remove this notation and simply explain that we are computing the asymptotic covariance.
4. We will add an explicit definition of $K$ and $K_\star$ which represent the train-train NTK and the train-test NTK respectively.
5. We will explicitly define $\Delta_y$ and $\Delta_{\perp}$. In words, $\Delta_y$ is the projection of the vector $\mathbf \Delta$ on the label direction $\mathbf y$ and $\Delta_{\perp}$ is the projection on orthogonal directions.
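Concretely, the two projections amount to the following minimal sketch (our illustration with generic vectors, not code from the paper):

```python
import numpy as np

def decompose_error(delta, y):
    """Split the error vector into its component along the label direction y
    (Delta_y) and the orthogonal remainder (Delta_perp)."""
    y_hat = y / np.linalg.norm(y)
    delta_y = (delta @ y_hat) * y_hat
    return delta_y, delta - delta_y

rng = np.random.default_rng(0)
y, delta = rng.standard_normal(8), rng.standard_normal(8)
d_y, d_perp = decompose_error(delta, y)
# the two components reassemble delta, and Delta_perp is orthogonal to y
print(np.allclose(d_y + d_perp, delta), abs(d_perp @ y) < 1e-10)
```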
6. We thank the reviewer for catching the issues with the references; we will go through them and add the most recent (journal version) citations for the papers. | Summary: The paper addresses the problem of analytical description of the rich (feature learning) dynamics of neural networks. To achieve this, the authors use the previously introduced dynamical mean field theory (DMFT), which identifies several key characteristics of the problem - order parameters - and defines their probability distribution throughout training using a path integral representation. The paper considers the $\mu P$ parametrization of the network, which is known to display feature learning behavior at infinite width (in contrast to the popular NTK and Standard parametrizations), with feature learning strength controlled by the parameter $\gamma$.
In the paper, the authors focus on leading width $O(N^{-1})$ corrections to the infinite width limit. Technically, this is achieved by taking into account quadratic (Gaussian) fluctuation around saddle point $\mathbf{q}_\infty$ of DMFT action $S(\mathbf{q})$. After deriving general equations governing $O(N^{-1})$ corrections, the authors consider a number of simplified scenarios where these equations can be solved analytically. Also, the authors validate qualitative conclusions of their theory in the non-synthetic experiments with CNN trained on CIFAR10.
Strengths: The approach chosen by the authors - identifying saddle point $\mathbf{q}_\infty$ of the system's action $S(\mathbf{q})$ and investigating it together with Gaussian fluctuation around $\mathbf{q}_\infty$ - is a workhorse of various branches of theoretical physics where behavior of a complex system needs to be analyzed. In physics, this approach not only provides a very sizable portion of available analytical results but, for example, also provides SOTA results for numerical modeling of realistic strongly-correlated materials [Kotliar 2004](https://pubs.aip.org/physicstoday/article-abstract/57/3/53/755526/Strongly-Correlated-Materials-Insights-From?redirectedFrom=fulltext), [Vollhardt 2019](https://arxiv.org/abs/1910.12650). Thus, realizing this general strategy is one of the fundamental directions within deep learning theory. Also, as noted by the authors, the DMFT approach is non-perturbative w.r.t. feature learning strength - a feature that is mostly absent in other approaches to NN dynamics away from kernel regime.
Weaknesses: * In many cases, solving DMFT equations analytically seems to be intractable. This significantly undermines the main purpose of the proposed theory - obtaining analytical insights into the network dynamics.
* The DMFT equations are bulky, which could make working with them quite exhausting.
* I believe it is hard to understand the main DMFT ingredients - order parameters $\mathbf{q}$ and their action $S(\mathbf{q})$ - from the current paper alone. Most probably, a careful reading of the original DMFT paper [9] is required to understand this paper. However, this is not the authors' fault but rather a consequence of the chosen approach.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * What dataset size $P$ was used for the main CIFAR10 experiment of fig. 10? Due to the mentioned $O(P^4T^4)$ memory requirements of storing the propagators, it seems to be impossible to work with the full ($P=50000$) CIFAR10 size. Also, what is the computational complexity of solving the DMFT equations? Both for the saddle point $\mathbf{q}_\infty$ and the $O(N^{-1})$ corrections, if they are different.
* While in most of the presented plots DMFT accurately describes the experiments, for big sample sizes (fig. 3) there is a significant disagreement between theory and experiment. Do you think this is a fundamental limitation of DMFT at the level of the leading order correction (ignoring $O(N^{-2})$ and beyond)? Or maybe it is because in the experiment you considered whitened data, whereas in more realistic scenarios the data typically has a low effective dimension (e.g., measured by the decay of data covariance eigenvalues)?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: The authors discuss the limitations of their current approach. The limitations mainly come from 1) the inability to numerically solve DMFT equations for large-scale problems (e.g. due to $O(P^4 T^4)$ propagator size) and 2) the need to consider higher order expansion around the saddle point $\mathbf{q}_\infty$.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the careful reading and supportive comments. Below we address the weaknesses pointed out and attempt to answer the reviewer's questions.
**Response to Weaknesses**
The reviewer is correct to point out that the DMFT equations are difficult to solve numerically and that the equations are bulky in the general case. We acknowledge these issues in the limitations section and will stress them further in the updated draft. Despite their complexity, we think it is still useful to derive the equations in full generality to show that, in principle, the finite size corrections to DMFT can be computed with methods similar to those used to compute the infinite width limit.
That being said, we think that a good deal of intuition can be gained by studying the special cases that are tractable numerically to analyze how various aspects of the problem (feature learning, sample size, depth etc) alter the dynamics and fluctuations.
Since some of the reviewers point out that we use physics jargon without comparison to ML terminology to describe some of the objects that appear in our study, we will add a table (see attached PDF in the global response) which maps the terms showing up in the calculation (order parameter, propagator, action, etc.) to terms more familiar to an ML audience (concentrating variable, asymptotic covariance, log density, etc.).
We will also add a short self-contained primer and derivation of the DMFT action in the Appendix so that readers do not need to refer to [9] to understand the present paper.
**Response to Questions**
1. We apologize for not being clear about this section. The CIFAR-10 plots are purely empirical. In these plots, we do not attempt to solve the DMFT equations or compute the propagator for this setting, as both the timesteps and samples are prohibitively large. The purpose of this section was to illustrate that the qualitative findings from the simpler analytically tractable models carry over to more realistic cases, specifically the accumulation of finite size effects over training and the fact that the finite size effects depend on a low degree polynomial in $1/N$. We will be sure to clarify this in the draft.
2. This is a great question. We are mainly showing in this toy example (which has a very simplified whitened data model) that the leading order corrections can be accurate in some problem settings ($P<N$ in this problem) but can underestimate finite size effects in other settings ($P>N$). We suspect that in this example, higher order corrections (like $P^2/N^2$, etc.) may be necessary to accurately capture it. We agree with the reviewer's intuition that correlations in natural data could reduce the scale of finite size noise in more realistic settings, making finite size networks closer to the infinite width limit. For instance, if the data matrix were low rank with rank $P_{eff} < P$, then the finite width effect in this two-layer linear network example is actually $P_{eff} / N$. We will add a proof of this and discuss it in the main text. We leave open for future works how the realistic structure of natural data alters the scale of finite width corrections. | null | null | Rebuttal 1:
Rebuttal: We thank all of the reviewers for their detailed reading and comments. We appreciate the general support for this paper and the comments on the paper's strengths and weaknesses. Many concerns were shared among reviewers, which has led us to make the following updates to the paper:
1. We spend more space defining all of the mathematical terms which appear in our paper and relate the physics terminology with more standard machine learning terminology. A table summarizing the map between our terminology and more standard terms is provided in the PDF. This will hopefully reduce the obscurity of our writing.
2. We provided a new experiment (for ReLU kernels at initialization) showing that the asymptotic covariance $\frac{1}{N} \Sigma$ predicted by our theory is accurate up to order $1/N^2$ (see attached PDF). This provides additional support for our approach, as we can characterize the error of our theoretical covariance predictions (though empirical estimates of the covariance error require simulating a very large number of networks).
3. We are now spending more space to define each of the terms (like $K_\infty, K, K_\star, f, f_\star, \Delta_y, \Delta_{\perp}$, etc) which appear in our equations. This will hopefully make the paper easier to parse and allow the reader to more easily interpret our results.
4. We will provide a self-contained derivation of the DMFT action $S$ in the Appendix so that we do not force the reviewer to read [9] Bordelon & Pehlevan 2022. We will also show how one can use $S$ to find the saddle point equations for $q_\infty$.
5. We expand in the Appendix our section which explains how to numerically solve the self-consistent equations for the saddle point $q_\infty$ and the propagator $\Sigma$ (see response to reviewer fkFE).
6. We will acknowledge in the limitations section that our paper operates at the level of rigor of a physics calculation rather than a fully rigorous proof which would need several additional assumptions to make the expansion properly defined.
7. We will fix all of the issues with the citations to make sure they are up to date and contain the appropriate journal and update paper citations to Arxiv preprints which are now published.
8. We will clarify that the CIFAR-10 experiments are purely empirical, meant to check whether the dynamics in a more realistic setting are qualitatively similar to our solvable examples.
Overall, we aim to make the paper more understandable and readable. With the additional page, we will be able to expand the writing and exposition in the main text.
Pdf: /pdf/2fbc4740dbde9f7fef8a863a94ac8ca8e7da6e24.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Reduced Policy Optimization for Continuous Control with Hard Constraints | Accept (poster) | Summary: This paper proposes RPO to handle general hard constraints (equality and/or inequality constraints per step) for RL. The framework consists of construction and projection stages with a penalty loss for end-to-end training. Finally, the authors validate the effectiveness of their approach on 3 test benchmarks with hard constraints and compare them to previous CMDP-based approaches.
Strengths: This paper studies safe RL with hard constraints which is an important problem. It is well-motivated and well-organized. The experiment results are promising as the proposed approach shows advantages against the CMDP baselines. This paper is novel as the first attempt to introduce GRG to RL to solve the hard equality and inequality constraints.
Weaknesses: There are several weak points of this paper for the reviewer.
1. This paper misses the recent literature addressing RL with hard safety constraints, such as
Wang, Y., Zhan, S. S., Jiao, R., Wang, Z., Jin, W., Yang, Z., ... & Zhu, Q. (2022). Enforcing Hard Constraints with Soft Barriers: Safe Reinforcement Learning in Unknown Stochastic Environments. arXiv preprint arXiv:2209.15090.
This recent ICML paper also claims they are dealing with hard safety constraints for RL.
2. For the reviewer, the two stages proposed by the authors work like a shielding to modify the action generated by the policy network to satisfy the hard constraints. The reviewer would like to know how the authors compare their approach to the literature on RL + shielding, such as
Bastani, Osbert, Shuo Li, and Anton Xu. "Safe Reinforcement Learning via Statistical Model Predictive Shielding." Robotics: Science and Systems. 2021.
3. The MDP formulated in this paper assumes a general stochastic environment, this could raise a problem - "how could you let a random variable (s_t, a_t) be less than 0" for Eq(3), I feel it should be within the probability format, e.g., "Pr(g <= 0) = 1". The tested examples are all deterministic environments as continuous tasks. Therefore, the authors may want to reconsider how to formulate their safe RL problem.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What are the soft constraints mentioned in the introduction?
2. The reviewer cannot agree that "it is worth noting that there is currently a lack of RL environments with complex hard constraints". The reviewer feels you can always define your own constraints either hard or soft as long as you have the environment model and full observability. The reviewer acknowledges the efforts to develop test examples but cannot agree those three can be claimed as "benchmarks"
3. Even if the final action satisfies the constraints, how should I know it is because of the two-stage procedure or the training objective with penalty loss?
4. How to enforce the policy network output basic actions?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have clearly presented the limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for his/her careful reading and valuable suggestions. Below we will answer your concerns point-by-point.
> **Q1**: This paper misses the recent literature [R5], which also deals with hard safety constraints for RL.
**A1**: Thank you for your valuable suggestion. We have carefully studied [R5] and will cite it in our final version. However, the studied problem in [R5] is different from that in our works. As the authors illustrated in this paper, they attempt to solve the chance-constrained RL problem, which has the form:
$$
\begin{array}{c}
\max_{\theta} J\left(\pi_{\theta}\right), \\\\
\text{ s.t. }P\left(s(t) \notin S_{u}\mid\pi_{\theta}, s(0)\right)\geq 1-\eta,\forall t \in[0, T],\forall s(0)\in S_{0},
\end{array}
$$
while the problem we studied is:
$$
\begin{aligned}
\max_\theta \ &J_R(\pi_\theta) \\\\
\text{ s.t. }\ &f_{i}(\pi_\theta(s_t);s_t)=0\quad\forall i, t, \\\\
&g_{j}(\pi_\theta(s_t); s_t)\leq 0\quad\forall j, t,
\end{aligned}
$$
Although [R5] can satisfy the safety chance constraints, the chance constraints themselves still provide only a soft guarantee for the satisfaction of the actual constraints.
[R5] Wang, Yixuan, et al. "Enforcing hard constraints with soft barriers: Safe reinforcement learning in unknown stochastic environments." ICML 2023.
> **Q2**: The reviewer would like to know how the authors compare their approach to the literature on RL + shielding, such as [R6].
**A2**: Thank you for raising the concern. We think the RL + shielding method described in [R6] also differs from our RPO. There are at least two different points. Firstly, the RL + shielding method in [R6] does not directly modify the action generated by the policy network like our RPO but maintains two policies: one for maximizing the cumulative reward, and a backup policy that is switched to when a potential violation exists. We think such methods should be classified as recovery-based methods like [40], which we have mentioned in line 90.
Secondly, the problem studied by [R6] is also different from our RPO's. RPO aims to solve hard instantaneous constraints, which are commonly very tight and exist in all states. For example, we must always satisfy the power balance equations in smart grid control. However, the RL + shielding method in [R6] can only deal with long-term soft constraints, which are not very tight, and violations may only exist in a few states. Hence, it is hard for this kind of method to solve RL with the hard constraints we studied.
[R6] Bastani, O., Li, S., & Xu, A. (2021). Safe Reinforcement Learning via Statistical Model Predictive Shielding. In Robotics: Science and Systems (pp. 1-13).
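The distinction can be made concrete with a schematic sketch (ours, with placeholder functions rather than either paper's actual code): a shielding/recovery method keeps two policies and switches between them, whereas an RPO-style method modifies every action so the hard constraints hold at every state:

```python
def shielded_action(s, pi, backup, is_safe):
    """Recovery/shielding style (schematic): switch to a backup policy
    only when the nominal action risks a violation."""
    a = pi(s)
    return a if is_safe(s, a) else backup(s)

def rpo_style_action(s, pi_basic, solve_nonbasic, project):
    """RPO style (schematic): every action is completed from the equality
    constraints and then projected to satisfy the inequality constraints."""
    a_basic = pi_basic(s)
    a = solve_nonbasic(s, a_basic)   # equality constraints hold by construction
    return project(s, a)             # then enforce inequality constraints
```

Here `pi`, `backup`, `is_safe`, `pi_basic`, `solve_nonbasic`, and `project` are hypothetical placeholders for the respective components of each method.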
> **Q3**: The MDP formulated in this paper assumes a general stochastic environment; this could raise a problem - "how could you let a random variable (s_t, a_t) be less than 0" for Eq(3), I feel it should be within the probability format. Therefore, the authors may want to reconsider how to formulate their safe RL problem.
**A3**: Thank you for raising the concern. As we explained in **A1, A2**, our RPO does not aim to solve the safe RL problem but rather MDPs with hard constraints. In contrast to safe RL methods, RPO focuses on decision problems with explicit instantaneous constraints like smart grid control. The difficulties of our studied problems stem mainly from the tightness of the equality constraints and the requirement to maintain the satisfaction of both equality and inequality constraints, rather than from randomness in the constraints. For example, if the power balance constraints in the smart grid were modelled in probability form, power imbalance could occur in some states, and that would ruin the whole electrical grid.
> **Q4**: What are the soft constraints mentioned in the introduction?
**A4**: The soft constraints we mentioned are constraints that are only satisfied with some probability or in expectation. These kinds of constraints are more like additional objectives, such as resource limitations. A soft constraint is different from a hard one: in some real-world applications like the smart grid, the hard constraints indicate feasibility and must be satisfied with high precision.
> **Q5**: The reviewer cannot agree that "... there is currently a lack of RL environments with complex hard constraints". The reviewer acknowledges the efforts to develop test examples but cannot agree those three can be claimed as "benchmarks."
**A5**: Thank you for raising the concern. More precisely, most existing benchmarks only consider inequality constraints, so there is currently a lack of RL environments with both hard equality and inequality constraints. Besides, a general interface that delivers explicit information on the constraints to the RL agent needs to be carefully designed. That is why we believe that our three environments can be claimed as "benchmarks" for the further study of RL with hard constraints.
> **Q6**: Even if the final action satisfies the constraints, how should I know it is because of the two-stage procedure or the training objective with penalty loss?
**A6**: Thank you for raising the concern. As stated in lines 258-260, SAC-L and DDPG-L are the versions of RPO without the two-stage procedure. You can find in Figure 3 and Table 2 that their performance is much poorer than that of RPO-SAC and RPO-DDPG.
> **Q7**: How to enforce the policy network output basic actions?
**A7**: The actions are divided into basic and nonbasic actions before training, and the division is not changed afterwards. Therefore, we do not need to enforce the policy network to output basic actions but rather define the outputs of the policy network to be the basic actions. More details on the division can be found in our response **A1** to reviewer ixcf and **Global Response A2**.
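As a toy illustration of this division (our sketch with a hypothetical power-balance-style constraint, not the paper's implementation), consider the equality constraint $\sum_k a_k - D = 0$: the policy outputs only the basic actions, and the remaining nonbasic action is solved from the constraint, so the equality holds by construction:

```python
import numpy as np

def construct_action(basic, demand):
    """Toy 'construction' stage: the policy network outputs only the basic
    actions; the single nonbasic action is solved from the hard equality
    constraint sum(a) - demand = 0."""
    nonbasic = demand - basic.sum()
    return np.append(basic, nonbasic)

basic = np.array([0.4, 0.3])            # hypothetical policy-network output
a = construct_action(basic, demand=1.0)
print(a, a.sum())                        # the full action satisfies sum(a) = demand
```

In RPO the reduction is done with GRG-style variable elimination on general (possibly nonlinear) equality constraints; this linear single-constraint case is only the simplest instance.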
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttals, which address most of my questions.
However, at least there is a formal math writing problem.
It is not clear to me whether the authors are solving the safe RL problem in a general continuous and stochastic MDP (lines 119-130) or a deterministic continuous environment where you can write (f(s_t, a_t) = 0, g(s_t, a_t) <= 0). Please keep in mind that lines 119-130 essentially assume that s_t, a_t are random variables; to compare a random variable to a number (e.g., 0), you have to use either an expectation operator or a probability operator.
My understanding is that this paper only deals with deterministic environments (because you want to write equality and inequality constraints directly on state and action without E[] and P()), so you should have the problem formulated as
s_{t+1} = f(s_t, a_t), f is unknown and continuous, rather than the transition probability function P : S × A × S → [0, 1]
---
Reply to Comment 1.1.1:
Title: Response
Comment: Thanks for your valuable suggestion as well. We are glad that our response helps you solve most of your questions and will present a mathematical formulation of MDP with hard constraints more formally in the final version. Here are some further interpretations for your concerns.
First, we need to clarify that we are studying RL with hard deterministic constraints in a general continuous and stochastic MDP. To further illustrate the properties of such a problem and present a formal mathematical formulation, we will show the differences between the standard Constrained MDP (CMDP) which is often studied by Safe RL methods, and our targeted problem --- MDP with hard constraints.
- CMDP [RR1] introduces an extra function $C: S\times A\times S \rightarrow \mathbb{R}$ based on MDP (line 119-130). The constraints are viewed as the form of discounted cumulative constraints, which are defined as $J_{C_{i}}(\pi)=\mathbb{E}\_{\tau \sim \pi} \left[\sum\_{t=0}^{\infty} \gamma^{t} C_i\left(s_{t}, a_{t}, s_{t+1}\right)\right]$. We denote the set of feasible stationary policies for a CMDP problem as $\Pi_{C} = \left\\{\pi \in \Pi: \forall i, J_{C_{i}}(\pi) \leq d_{i}\right\\}$. The goal of safe RL is to find an optimal policy $\pi$ that maximizes the discounted cumulative reward in the set of feasible stationary policies $\Pi_{C}$.
- **MDP with hard (deterministic) constraints** incorporates hard equality constraints and inequality constraints into the standard MDP. Concretely, the set of stationary policies that satisfy the hard equality constraints and inequality constraints are denoted as $\Pi_{F} =\left\\{\pi \in \Pi: \forall i, \forall s, f_{i}\left(\pi(s); s\right)=0\right\\}$ and $\Pi_{G} =\left\\{\pi \in \Pi: \forall j, \forall s, g_{j}\left(\pi(s); s\right) \leq 0\right\\}$, respectively. **Notably, here $f_i$ and $g_j$ are the hard deterministic constraint functions while $s_t$ and $a_t$ still involve randomness.**
You can find that our formulation of MDP with hard constraints is actually a special case of CMDP, focusing on hard instantaneous and deterministic constraints. Notably, our test environment of OPF with battery energy storage has a Gaussian perturbation in its transition function, and the results on it indicate that RPO has the capability to deal with randomness in the transition.
In other words, the deterministic part in our problem formulation is only the constraint function and does not cover the transition function.
[RR1] Altman, E. (1999). Constrained Markov decision processes (Vol. 7). CRC press. | Summary: Inspired by the GRG algorithm, this paper proposed a new reduced policy optimization (RPO) algorithm to handle hard equality and inequality constraints that must be satisfied by any learned policies for continuous control. The algorithm consists of two separate phases. Phase 1 involves the training of the policy network and the equation solver for tackling the equality constraints. Phase 2 further utilizes GRG to tackle the remaining inequality constraints without violating any equality constraints. Experiments conducted on three benchmark problems show that RPO allows the trained policies to achieve high cumulative rewards while minimizing the potential chances of constraint violation, compared to several baseline algorithms.
Strengths: It is important to develop new learning algorithms that can handle a variety of hard constraints requested for solving a reinforcement learning problem. This paper proposed an interesting new algorithm design towards achieving this goal. The validity of some aspects of the new algorithm design is also supported by some theoretical results. Meanwhile, the experiment results look promising.
Weaknesses: I have some concerns regarding the novelty of the constrained reinforcement learning problems studied in this paper. Past research works often studied constraints regarding the expected behaviors of the trained policy across a full episode. In other words, the constraints are defined over a long sequence of states and actions where all state-action pairs in the sequence are interdependent. In contrast, this paper considered constraints that can be independently defined for every state-action pair. While this consideration certainly has its practical value, the RPO algorithm developed in this paper may not be directly comparable to algorithms proposed for time-correlated rather than time-independent constraints. Hence, the real technical contribution of RPO for handling multiple equality and inequality constraints may need to be more precisely described and clearly justified.
The experimental evaluation was performed on three relatively simple reinforcement learning problems. To fully understand the technical advantages of the new RPO algorithm over existing approaches, it may be important to conduct further evaluations of the RPO algorithm on more (and perhaps more challenging) benchmark problems. For example, the review of existing hard-constrained reinforcement learning algorithms in Section 2 seems to suggest a few additional benchmark problems that may need to be considered in this paper too.
Other than GRG, there are a variety of ways to handle time independent constraints on state-action pairs. While this paper introduced the basic idea of GRG clearly, the importance of using GRG over other competing constraint handling or constrained optimization methods may need to be further clarified and justified. Additional theoretical and empirical evaluations may be helpful to clearly reveal the technical novelty of using GRG for constrained policy optimization.
This paper seems to assume that equality constraints can be easily satisfied by separating all dimensions of the action space into basic dimensions and nonbasic dimensions. Upon fixing the basic dimensions, it is feasible to find suitable values for the nonbasic dimensions to satisfy all equality constraints. This assumption may be debatable and may need to be further investigated. Particularly, it is not clear to me how to systematically divide the full action space into the basic and nonbasic subspaces. Will any arbitrary division work? What will be the impact of any division on the performance of the learned policy? Furthermore, given that the equality constraints can be highly sophisticated and nonlinear, it may not always be possible for the equation solver to find feasible action that can satisfy all equality constraints, subject to the division used.
What seems also questionable regarding the new algorithm design is the necessity of using Lagrangian penalties in the loss function for training policy networks. I understand that this may make the projection stage more efficient and reliable. However, the penalties do not seem compulsory and are subject to the complexity of the inequality constraints. For the benchmark problems studied in the experiments, it seems that the inequality constraints are not hard to satisfy. The authors mentioned some existing approaches that can satisfy the inequality constraints directly. In view of this, the necessity of using Lagrangian penalties in the loss function should be further studied, both theoretically and experimentally. Meanwhile, the authors may need to evaluate their RPO algorithm on benchmark problems with more challenging inequality constraints, in order to better demonstrate its strong constraint handling capabilities.
Besides the above, according to my understanding, the projection stage is treated as part of the learning environment. Does this stage depend fully on the action output from the construction stage? If the projection stage is random or time-varying, the stability of the policy network training process may be affected. Furthermore, the reparameterization trick requires propagating the gradients back from the action space to the trainable parameters in the policy network. The effectiveness of this trick may be affected since the projection stage may not be easily supported by the gradient calculation map. On the other hand, if the learning environment is re-defined such that the projection stage is treated as part of the environment, this trick may also affect the effectiveness and reliability of the policy network training process and may need to be further examined.
Since the benchmark problems studied in this paper are new, please give a more detailed explanation of each problem in this paper. For example, the authors mentioned that the equality constraint of the Spring Pendulum problem is state-dependent. However, the actual dependence relationship is not presented, making it hard to understand how the equality constraint is defined and whether it is difficult to satisfy the equality constraint.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: What is the real technical contribution of RPO for handling multiple time independent equality and inequality constraints?
How to systematically divide the full action space into the basic and nonbasic subspaces? Will any arbitrary division work? What will be the impact of any division on the performance of the learned policy?
When is it important and necessary to use Lagrangian penalties in the loss function for training policy networks?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I do not have any concerns regarding this question.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your thorough review, which helps us improve the quality of our work. We address all of your questions below.
> **Q1**: What is the real technical contribution of RPO?
**A1**: Thank you for your constructive suggestion. The real technical contribution of RPO is to handle both hard instantaneous (time-independent) equality and inequality constraints in a general RL paradigm. It's worth noting that past research works in constrained RL cannot handle such hard instantaneous constraints. The properties of RPO and the differences between RPO and other methods have been summarized in Table 1. In the final version, we will present the technical contribution of RPO more precisely.
> **Q2**: The experimental evaluation was performed on three relatively simple problems ... additional benchmark problems may be needed.
**A2**: We respectfully disagree with the comment that our test problems are relatively simple. Please refer to **Global Response A1**.
> **Q3**: The importance of using GRG over other competing constraint handling or constrained optimization methods may need to be further clarified and justified. Additional theoretical and empirical evaluations may be added for GRG.
**A3**: Thank you for your constructive suggestion. As we illustrated in **A1**, RPO, by utilizing the GRG method, can handle general hard instantaneous equality and inequality constraints, while previous research approaches cannot handle this problem.
With respect to additional empirical evaluation, although there are various RL methods for time-independent constraints, most of them only consider inequality constraints. Hence, methods like Safety Layer [14] fail on our benchmarks, which involve both equality and inequality constraints, as shown in our experiments in Table 2 and Figure 3.
With respect to additional theoretical evaluation, the constraint satisfaction of RPO comes from the GRG updates applied after the policy network, whereas previous constrained RL methods directly use actions from the policy network and enforce constraint satisfaction via optimization tricks during training. Hence, the theoretical guarantee of RPO on constraint satisfaction follows directly, since the convergence of GRG and Newton's method has been proved in many classical optimization texts [25]. In addition, we have presented two theorems on the properties of RPO, covering the GRG update executed in the tangent space and the exact penalty theorem for inequality constraints.
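To make the post-network GRG update concrete, here is a minimal hypothetical sketch (the constraint functions, initial action, and step size are illustrative toys, not the paper's benchmarks): the basic action is moved by gradient descent on the inequality violation, and the nonbasic action is re-constructed from the equality constraint after every step, so the equality constraint is preserved throughout the projection.

```python
# Toy problem: actions (a1, a2) with
#   equality constraint:   a1 + a2 - 1 = 0
#   inequality constraint: a2 - 0.6 <= 0
# a1 is the basic action; the nonbasic action a2 is constructed
# from the equality constraint, so equality holds at every step.

def construct(a_basic):
    """Solve the equality constraint for the nonbasic action."""
    return 1.0 - a_basic

def grg_project(a_basic, lr=0.05, tol=1e-6, max_iters=1000):
    """Reduced-gradient descent on the inequality violation,
    moving only the basic action; the nonbasic action is
    re-constructed after every update."""
    for _ in range(max_iters):
        a_nonbasic = construct(a_basic)
        violation = max(0.0, a_nonbasic - 0.6)
        if violation <= tol:
            break
        # d(violation)/d(a_basic) = d(1 - a_basic)/d(a_basic) = -1
        grad = -1.0
        a_basic -= lr * grad  # descend the violation
    return a_basic, construct(a_basic)

a1, a2 = grg_project(0.2)  # initial action violates the inequality
assert abs(a1 + a2 - 1.0) < 1e-9  # equality preserved throughout
assert a2 - 0.6 <= 1e-6           # inequality satisfied after projection
```

Only the basic action is updated; the equality constraint never needs to be re-projected because the construction step restores it exactly.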
> **Q4**: How to divide the full action space into the basic and nonbasic subspaces? What will be the impact of division?
**A4**: Thank you for raising the concern. For cases where an equality constraint does not involve the full action space, a systematic division procedure is described in our **Global Response A2**.
> **Q5(1)**: Some existing approaches can satisfy the inequality constraints directly; the necessity of using Lagrangian penalties in the loss function should be studied. When is it important and necessary to use these penalties?
**A5(1)**: Thank you for your valuable suggestion. However, we cannot agree that the Lagrangian penalties are unnecessary, for two reasons. First, if we remove the exact penalty term from the loss function, the constraint violation of $\tilde{a}$ will be very large, leading to thousands of iterations in the projection stage, which is unacceptable in real-world applications. Second, if there is a large gap between the initial action $\tilde{a}$ and the final action $a$, training without the Lagrangian penalties becomes unstable.
In addition, while some existing approaches can satisfy the inequality constraints directly, most of them cannot deal with equality constraints (e.g., Safety Layer [14]), as shown in Table 2 and Figure 3.
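As an illustration of how such a penalty term can enter the actor loss, here is a hypothetical sketch (the Q-value, constraint functions, and weight `lam` are placeholders for illustration, not the paper's implementation):

```python
def penalized_actor_loss(q_value, action, ineq_constraints, lam=10.0):
    """Standard actor loss (-Q) plus an exact-penalty term that
    pushes the pre-projection action toward the feasible region."""
    penalty = sum(max(0.0, g(action)) for g in ineq_constraints)
    return -q_value + lam * penalty

# Toy check: a feasible action incurs no penalty,
# an infeasible one is penalized.
g = [lambda a: a[1] - 0.6]  # inequality constraint a2 - 0.6 <= 0
feasible = penalized_actor_loss(1.0, (0.4, 0.5), g)
infeasible = penalized_actor_loss(1.0, (0.2, 0.8), g)
assert feasible == -1.0
assert infeasible > feasible
```

The penalty vanishes on feasible actions, so it biases the policy toward "good" initial actions for the projection stage without distorting the objective inside the feasible region.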
> **Q5(2)**: It seems that the inequality constraints are not hard to satisfy for the benchmark problems. Require challenging inequality constraints ...
**A5(2)**: Please refer to our **Global Response A1(2)**.
> **Q6(1)**: Does the projection stage depend fully on the action output from the construction stage? Is the projection stage random or time-varying?
**A6(1)**: Yes, the projection stage depends fully on the action output from the construction stage, without any extra randomness. However, during training, the output action for a given state changes, which may lead to a varying number of iterations in the projection stage. Hence, we do not let the projection stage participate in backpropagation, as shown in Equation 8.
> **Q6(2)**: Treating the projection stage as part of learning may affect the effectiveness and reliability of the policy network.
**A6(2)**: The projection stage is not used as a part of the learning environment. Instead, as shown in Algorithm 1, it is used in the TD target, since the action output by the projection stage is the actual behaviour policy. As analyzed in lines 237-247, this is why we choose off-policy RL methods as the base of RPO, which ensures the stability of the training.
> **Q7**: A more detailed explanation of each problem. Why the equality constraint of the Spring Pendulum is state-dependent?
**A7**: Thank you for your constructive suggestion. We will add a more detailed explanation of our benchmark problems in the final version. Below we will explain why the equality constraints in Spring Pendulum are state-dependent. As shown in Appendix D.2, the equality constraint of the spring pendulum is $\dot{l}+\ddot{l}dt=0$, where $\dot{l}$ is the stretching velocity of the spring and $\ddot{l}$ is the acceleration of the spring. According to Euler-Lagrange equations, we have
$$
\ddot{l}=\frac{f_{s}+m l \dot{\theta}^{2}-k\left(l-l_{0}\right)-mg\cos\theta}{m}.
$$
This means the equality constraint $\dot{l}+\ddot{l}dt=0$ depends on the state variables, including the spring length $l$, the angular velocity $\dot{\theta}$, and the cosine of the angle $\theta$.
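Numerically, the constraint pins down the spring force given the state; a minimal sketch with made-up parameter values (mass, stiffness, rest length, and time step are illustrative only) solves the Euler-Lagrange relation for $f_s$ and verifies the residual:

```python
import math

# Hypothetical parameter values, for illustration only.
m, k, l0, g_acc, dt = 1.0, 5.0, 1.0, 9.8, 0.01

def l_ddot(f_s, l, l_dot, theta, theta_dot):
    """Spring acceleration from the Euler-Lagrange equation."""
    return (f_s + m * l * theta_dot**2
            - k * (l - l0) - m * g_acc * math.cos(theta)) / m

def solve_spring_force(l, l_dot, theta, theta_dot):
    """Choose f_s so that the equality constraint
    l_dot + l_ddot * dt = 0 holds at this state."""
    target = -l_dot / dt  # required spring acceleration
    return (m * target - m * l * theta_dot**2
            + k * (l - l0) + m * g_acc * math.cos(theta))

state = (1.2, 0.3, 0.5, 0.1)  # (l, l_dot, theta, theta_dot)
f_s = solve_spring_force(*state)
residual = state[1] + l_ddot(f_s, *state) * dt
assert abs(residual) < 1e-9  # equality constraint satisfied
```

The solved force clearly changes with the state, which is exactly the sense in which the constraint is state-dependent.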
---
Rebuttal 2:
Title: Thank the authors for their response
Comment: I would like to thank the authors for their response to my concerns. The response addressed some of my concerns. I still don't quite understand why it is very important to handle time-independent constraints for many real-world applications. Furthermore, if the projection stage is effective at handling constraints, I don't understand why we should worry about the Lagrangian penalties while training the policy networks. This seems to suggest that the projection stage may fail to satisfy the time-independent constraints of the problem being solved. I guess more empirical and theoretical analysis regarding this issue may be helpful. However, I am clearer about the technical details based on the authors' response and will raise my recommendation a bit.
---
Rebuttal Comment 2.1:
Title: Response
Comment: Thank you for the careful and thorough reviews. We are glad that our response addressed some of your concerns about the technical details. We will provide further clarification for your concerns below and add corresponding analysis in our final version.
**(1)**: Why it is very important to handle time-independent constraints for many real-world applications?
Here we need to clarify 2 points:
- **Some real-world applications do have such hard time-independent (instantaneous) constraints.** For example, the time-independent power flow constraints and voltage-bound constraints in the smart grid must be fulfilled at each time slot. If these constraints are not satisfied exactly, an RL method cannot be deployed there. Although existing Safe RL algorithms claim to handle both time-correlated and time-independent constraints, they only satisfy the constraints with a certain probability or in expectation. This means they are not applicable to the applications mentioned above; our experiments in Figure 3 and Table 2 also confirm this.
Therefore, it is important to design RPO, which is customized for handling hard instantaneous constraints in some applications.
- Our RPO can be viewed as **a supplement to Safe RL algorithms**, since there is no contradiction in using RPO together with most Safe RL algorithms, such as CPO. If we meet a real-world application with both hard instantaneous and soft cumulative (time-correlated) constraints, **we can use RPO to handle the hard instantaneous constraints while applying an existing Safe RL method to handle the soft cumulative constraints.**
**(2)**: Why do we need to use the projection stage and Lagrangian penalties at the same time when dealing with inequality constraints?
We would like to clarify that handling hard inequality constraints exactly in any state is challenging, especially together with hard equality constraints. Hence, it is not enough to use only either the projection stage or Lagrangian penalties. Here are concrete reasons from two aspects.
- If we only use the projection stage to handle inequality constraints, the training process will be **unstable and inefficient**. Using Lagrangian penalties in the loss can generate "good" initial actions for the projection stage.
- If we only use the Lagrangian penalties to handle inequality constraints, they cannot satisfy the inequality constraints exactly, because of **the approximation and generalization errors of the policy network**. As shown in Table 3, the inequality-constraint satisfaction of SAC-L and DDPG-L is poorer than that of RPO-SAC and RPO-DDPG. The projection stage further reduces the constraint violation.
Therefore, we need to use both the projection stage and Lagrangian penalties at the same time when dealing with inequality constraints. | Summary: The paper solves the RL problem with equality and inequality hard constraints with a reduced policy optimization (RPO) algorithm, which combines RL with the generalized reduced gradient (GRG) algorithm. RPO partitions actions into basic actions and nonbasic actions following the GRG method, outputs the basic actions using a policy network, and calculates the nonbasic actions using domain knowledge to satisfy equality constraints. Then, the actions go through a projection stage to handle inequality constraints. Experimental results show that the proposed algorithm behaves better than the baselines for MDP with hard constraints.
Strengths: Considering both equality and inequality constraints in RL is an interesting problem. The proposed algorithm can handle the problem well. Also, the paper introduced some new RL benchmarks with hard constraints.
Weaknesses: 1. In Equation (4), it seems that there is an implicit assumption that $J^F_{:, m:n}$ is invertible. However, this may not always be true. I think in some environments, the equality constraints may not even depend on the actions but only on the states. For example, if we want a car to always stay on a pre-defined path, then $F$ is only a function of states, and $\frac{\partial F}{\partial a}=0$. The proposed approach seems unable to solve this problem.
2. In the experiments, the equality constraints can be solved by constructions, e.g., in Figure 2(a), let $f_2=\frac{f_1 \cos\theta_1}{\cos\theta_2}$, where $\theta_1$/$\theta_2$ are the angles between $f_1$/$f_2$ and the vertical direction. It is unclear to me why we need to satisfy the equality constraints using 4.1 instead of just satisfying them by construction since the domain knowledge required by 4.1 and by construction are the same.
3. It seems that there is a mismatch between Figure 3 and Table 2, where the constraint violations of the RPO methods are less than $10^{-4}$ in the first two environments in Table 2 but can be larger than $10^{-4}$ in Figure 3. The meaning of “max violation” is also unclear.
4. There are typos even in the mathematical parts of the paper. For example, line 161 $a^N\in\mathbb{R}^m$ -> $a^N\in\mathbb{R}^{n-m}$; line 173 $J^F\in\mathbb{R}^{(m-n)\times n}$ -> $J^F\in\mathbb{R}^{(n-m)\times n}$.
5. Constraint violation still exists in the experiments, and there is a lack of discussion.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Do you assume that $J_{:,m:n}^F$ is invertible in Equation 4? How can we handle the case when $F$ is only a function of states?
2. Why do we need to satisfy the equality constraints using 4.1 instead of just satisfying them by construction since the domain knowledge required by 4.1 and by construction are the same (the second point in Weaknesses)?
3. Why is there a mismatch between Figure 3 and Table 2 (the third point in Weaknesses)?
4. Can the authors give an analysis of the reason why constraint violation still exists in the experiments?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors address the limitations well in Section 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback on our paper. Here we address your detailed questions as below:
> **Q1**: In Equation (4), it seems that there is an implicit assumption that $J^F_{:, m:n}$ is invertible. However, this may not always be true.
**A1**: Thank you for raising the concern. We will analyze the invertibility of $J^F_{:, m:n}$ in two situations: linear equality constraints and nonlinear equality constraints.
In the linear case, if $J^F$ is not invertible, it means there exist redundant equality constraints, then we can directly delete these redundant equality constraints and redefine the dimension of basic actions to make $J^F$ invertible. For example, if we have 3 linear equality constraints $Ax+b=0$ on 4 actions, where
$$
\left[A\left|b\right.\right]=\left(
\begin{array}{cccc|c}
1 & 0 & -2 & 3 & 2\\\\
5 & -3 & 1 & 4 & -1\\\\
4 & -3 & 3 & 1 & -3
\end{array}\right).
$$
Here we choose $a_1$ as the basic action, and then $J^F_{:, m:n}=A_{:, 1:4}$ is not invertible, since the third equality constraint is a linear combination of the other two. Hence, we can delete this redundant constraint, and the new equality constraints are $\tilde{A}x + \tilde{b} = 0$, where
$$
\left[\tilde{A}\left|\tilde{b}\right.\right]=\left(
\begin{array}{cccc|c}
1 & 0 & -2 & 3 & 2\\\\
5 & -3 & 1 & 4 & -1
\end{array}\right).
$$
Then, the new basic actions will be $(a_1, a_2)$ and new $J^F_{:, m:n}=\tilde{A}_{:, 2:4}$ is invertible.
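This redundancy check can be verified numerically. Below is a small NumPy sketch of the example above (the rank test and index slices follow the matrices just shown; this is an illustration, not the paper's implementation):

```python
import numpy as np

# Augmented system [A | b] from the example: the third row equals
# (row 2 - row 1), so one equality constraint is redundant.
A = np.array([[1., 0., -2., 3.],
              [5., -3., 1., 4.],
              [4., -3., 3., 1.]])
b = np.array([2., -1., -3.])

assert np.linalg.matrix_rank(A) == 2  # only 2 independent constraints

# With a1 as the single basic action, the 3x3 nonbasic Jacobian
# A[:, 1:4] is singular, so the construction stage cannot solve it.
assert abs(np.linalg.det(A[:, 1:4])) < 1e-9

# Drop the redundant third row; with (a1, a2) as basic actions,
# the 2x2 nonbasic Jacobian is now invertible (det = -11).
A_tilde, b_tilde = A[:2], b[:2]
assert abs(np.linalg.det(A_tilde[:, 2:4])) > 1e-9
```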
In the nonlinear case, $J^F$ may not be invertible at some specific action points. In such a situation, we can add a small perturbation to the current action point or to $J^F$ to make it invertible. This is a common trick in classical control methods, and such situations are rarely encountered in practice.
> **Q2**: I think in some environments, the equality constraints may even do not depend on the actions but only the states. For example, if we want a car to always stay in a pre-defined path, then $F$ is only a function of states, and $\frac{\partial F}{\partial a}=0$. The proposed approach seems cannot solve this problem.
**A2**: We believe our method can still handle the above problem by modifying the model formulation. Note that the current state $s_t$ is inherently determined by the last state $s_{t-1}$ and last action $a_{t-1}$, i.e., $s_t = f(s_{t-1},a_{t-1})$. Therefore, we can transform action-independent equality constraints into action-dependent ones. For example, if we want a car to always stay on a pre-defined path, we can formulate the equality constraints on the current action by predicting the next position (state).
> **Q3**: It is unclear to me why we need to satisfy the equality constraints using 4.1 instead of just satisfying them by construction since the domain knowledge required by 4.1 and by construction are the same.
**A3**: Thank you for raising the concern. Section 4.1 actually introduces how to backpropagate the gradient through the construction stage. If we satisfied the equality constraints by direct construction and ignored the gradient $\frac{\partial\mathcal{L}}{\partial a^N} \frac{\partial a^N}{\partial a^B}$ from the construction stage in Section 4.1, we would only obtain the partial gradient $\frac{\partial \mathcal{L}}{\partial a^B}$ on the basic actions, which would ruin the training of the policy network. The complete gradient flow is
$$
\nabla_{a^B} \mathcal{L} = \frac{\partial \mathcal{L}}{\partial a^B} + \frac{\partial\mathcal{L}}{\partial a^N} \frac{\partial a^N}{\partial a^B}
$$
In Appendix E.3, we have also analyzed this concern experimentally. Figure 7 shows that RPO with only the partial gradient (i.e., without the gradient from the construction stage) attains a much lower cumulative reward than RPO with the full gradient. In other words, if we construct the nonbasic actions directly and do not account for them in backpropagation, the performance of RPO is very poor.
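The difference between the partial and full gradient can be seen in a minimal hypothetical example (the constraint and loss below are toys chosen for illustration, not the paper's benchmarks): with equality constraint $a^B + a^N = 1$, the construction is $a^N = 1 - a^B$, and the chain-rule term through the construction changes both the sign and magnitude of the gradient.

```python
# Toy setup: equality constraint a_B + a_N = 1 gives the construction
# a_N = 1 - a_B, and the loss is L = a_B**2 + 2 * a_N**2.

def full_gradient(a_B):
    a_N = 1.0 - a_B
    dL_daB = 2.0 * a_B    # partial gradient on the basic action
    dL_daN = 4.0 * a_N    # gradient on the constructed action
    daN_daB = -1.0        # gradient through the construction stage
    return dL_daB + dL_daN * daN_daB

def finite_difference(a_B, eps=1e-6):
    """True derivative of the composed loss, for verification."""
    L = lambda x: x**2 + 2.0 * (1.0 - x)**2
    return (L(a_B + eps) - L(a_B - eps)) / (2.0 * eps)

a_B = 0.25
# The partial gradient alone (0.5) has the wrong sign and magnitude;
# the full gradient matches the true derivative (-2.5).
assert abs(full_gradient(a_B) - finite_difference(a_B)) < 1e-6
assert full_gradient(a_B) != 2.0 * a_B
```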
> **Q4**: It seems that there is a mismatch between Figure 3 and Table 2, where the constraint violation of RPO methods are less than $10^{-4}$ in the first two environments in Table 2 but can be larger than $10^{-4}$ in Figure 3. It is also not clear about the meaning of “max violation”.
**A4**: The mismatch between Figure 3 and Table 2 arises because the former reports the training process while the latter reports the evaluation process. There are two reasons why the evaluation performance is better than the training performance. First, randomness is introduced during training for exploration but removed during evaluation; this is common practice in RL methods. Second, we allow a larger maximum number of GRG updates in the evaluation procedure, a practical trick mentioned in Appendix E.1 (lines 625-627).
The "max violation" in Figure 3 means the maximum violation of all constraints in one state, as we illustrated in lines 292-294.
> **Q5**: There are typos even in the mathematical parts of the paper. For example, line 161 $a^N\in\mathbb{R}^m$ -> $a^N\in\mathbb{R}^{n-m}$; line 173 $J^F\in\mathbb{R}^{(m-n)\times n}$ -> $J^F\in \mathbb{R}^{(n-m)\times n}$.
**A5**: Thank you for pointing out our typos. We will correct them in the final version.
> **Q6**: Constraint violation still exists in the experiments, and there is a lack of discussion.
**A6**: Thank you for your constructive suggestion. Our method involves multiple hard equality and inequality constraints. In practical computation, the tolerance for constraint satisfaction is usually set to 1e-3 or 1e-4, which our method achieves. The constraint violation observed in the experiments is within this tolerance. We will add this discussion to Section 5.3 in the final version.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: I would like to thank the authors for providing additional information and examples. The rebuttal addresses my concerns. Please modify the paper accordingly in the final version. I have raised my recommendation for the paper.
---
Reply to Comment 1.1.1:
Title: Response
Comment: Thanks for your positive feedback. We are glad that our response addresses your concerns and will modify the paper accordingly in the final version. | Summary: The authors introduce a policy optimization methodology suitable for continuous control problems with hard constraints. The optimization framework, named Reduced Policy Optimization (RPO), utilizes mathematical tools such as Generalized Reduced Gradient (GRG) and Lagrangian relaxation to address hard (equality) and soft (inequality) constraints respectively. Through GRG basic and non-basic actions are extracted. The policy network produces the basic actions and by solving the equality equations non-basic actions are calculated. The Lagrangian relaxation is implemented in the policy’s cost function to fulfill the inequality constraints. The evaluation of RPO occurred in three different environments the Safe Cartpole, Spring Pendulum, and OPF with battery energy storage.
Strengths: - The proposed methodology is innovative since it introduces GRG to the RL setting. The goal of RPO is to create policy optimization with hard constraints in an agnostic manner, which can be very beneficial to the field of RL and make the agent capable of solving a variety of complex tasks.
- The manuscript is well-written and thorough. Related work and relative background are on point.
- The experiment results present significant improvements in comparison with current policy optimization methods.
Weaknesses: - The number of experiments and the complexity is insufficient to back up the algorithm's robustness and versatility for complex tasks.
- To obtain the non-basic actions, one must implement an equation solver that depends on the task, which makes the implementation more difficult. How can we define this solver in a generic way?
- In general, off-policy algorithms tend to be more time-consuming than on-policy algorithms, and GRG adds further time complexity to the whole optimization process. The authors are aware of this; hence a training-time comparison should have been included.
- The included code contains only the cart pole benchmark.
- The code does not run as is: there is no version defined and a lot of deprecated numpy methods (not present on my machine for example) are used.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - How do you expect RPO to perform in complex tasks with hard constraints, such as locomotion tasks or tasks requiring a robotic manipulator?
- What is the average training time using the RPO in comparison to vanilla policy optimization (e.g. SAC, DDPG)?
- How can we define this equation solver for the non-basic actions in a generic way?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately discuss the limitations of their proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback and constructive comments! Below, we itemize the weaknesses and comments you mentioned and answer each of them.
> **Q1**: The number of experiments and the complexity is insufficient to back up the algorithm's robustness and versatility for complex tasks.
**A1**: Thanks for your comments. However, we respectfully disagree that our experiments are insufficient. We have tested our algorithm in three environments, and we note that similar papers like [R1, R2] tested 3-4 environments as well. Besides, the complexity of our three test environments is demonstrated in **Global Response A1**.
[R1] Liu P, Tateo D, Ammar H B, et al. Robot reinforcement learning on the constraint manifold[C]//Conference on Robot Learning. PMLR, 2022: 1357-1366.
[R2] Wang, Y., Zhan, S. S., Jiao, R., Wang, Z., Jin, W., Yang, Z., … & Zhu, Q. (2023, July). Enforcing hard constraints with soft barriers: Safe reinforcement learning in unknown stochastic environments. In International Conference on Machine Learning (pp. 36593-36604). PMLR.
> **Q2**: To obtain the non-basic actions, one should implement an equation solver depending on the task, which makes the implementation more difficult. How can we define this solver in a generic way?
**A2**: This is a good question. Unlike general nonconvex optimization, solving nonlinear equations admits many efficient methods, such as the modified Newton's method, that possess global convergence. A more detailed theoretical analysis of the convergence of the modified Newton's method can be found in Chapter 10 of [R3]. In fact, in the OPF with battery energy storage case, we apply Newton's method in the construction stage. Therefore, we can adopt the modified Newton's method as a general solver for this problem. Notably, for some special equality constraints, we can use analytical solutions directly instead of the modified Newton's method; that is why we do not fix a single solver.
[R3] Luenberger D G, Ye Y. Linear and nonlinear programming[M]. Reading, MA: Addison-wesley, 1984.
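To make the generic-solver idea concrete, here is a minimal sketch of a damped (modified) Newton iteration for the nonbasic actions, assuming a finite-difference Jacobian; this is a hypothetical illustration, not our released code:

```python
import numpy as np

def solve_nonbasic(f, x0, tol=1e-10, max_iter=100):
    """Solve f(x) = 0 for the nonbasic actions x via a damped Newton iteration.

    f  : callable R^k -> R^k, the equality constraints with the basic
         actions already fixed.
    x0 : initial guess for the nonbasic actions.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        # Finite-difference Jacobian; a task-specific analytic Jacobian
        # would normally be preferred.
        eps = 1e-7
        J = np.stack([(f(x + eps * e) - fx) / eps for e in np.eye(len(x))],
                     axis=1)
        step = np.linalg.solve(J, -fx)
        # Backtracking (damping) is what gives the modified Newton method
        # its global-convergence behavior (cf. Chapter 10 of [R3]).
        t = 1.0
        while np.linalg.norm(f(x + t * step)) > np.linalg.norm(fx) and t > 1e-8:
            t *= 0.5
        x = x + t * step
    return x

# Toy system: x0^2 + x1 - 3 = 0 and x0 - x1 + 1 = 0, with root (1, 2).
root = solve_nonbasic(lambda x: np.array([x[0] ** 2 + x[1] - 3,
                                          x[0] - x[1] + 1]),
                      x0=np.array([0.5, 0.5]))
```

In practice one would substitute an analytic Jacobian or an off-the-shelf root finder (e.g., `scipy.optimize.root`) for the task at hand.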
> **Q3**: The comparison of training time between GRG and off-policy algorithms should be included.
**A3**: Thank you for your valuable suggestion. The training time comparison in the environment Safe CartPole and Spring Pendulum is shown in the following tables. Each result is averaged in 5 runs.
Due to the time limitation, the training time comparison on OPF with battery energy storage is still in progress, and we will include all results in our final version.
The training time comparison in Safe Cartpole.
| Method | RPO-DDPG | RPO-SAC | DDPG-L | SAC-L | CPO | CUP | Safety Layer |
|:------------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|
| **Training Time (s)** | 190.2 | 375.5 | 150.2 | 268.5 | 68.7 | 75.2 | 1067.0 |
The training time comparison in Spring Pendulum.
| Method | RPO-DDPG | RPO-SAC | DDPG-L | SAC-L | CPO | CUP | Safety Layer |
|:------------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|
| **Training Time (s)** | 242.6 | 419.0 | 174.9 | 309.5 | 98.6 | 81.4 | 3859.7 |
> **Q4 & Q5**: The included code contains only the cart pole benchmark. The code does not run as is: there is no version defined and a lot of deprecated numpy methods (not present on my machine for example) are used.
**A4 & A5**: Thank you for raising the concern. According to the NeurIPS 2023 policy, we cannot share an anonymous repository during the rebuttal phase. Hence, we will release the code for all three benchmarks later.
To help you solve the code-running problems, we also present the versions of Python packages used in our work as below.
```
numpy 1.21.6
torch 1.13.1
gym 0.19.0
matplotlib 3.5.2
scipy 1.9.1
scikit-learn 1.0.2
pandas 1.4.4
```
In addition, our experimental operating system is Ubuntu 20.04.6 LTS, the experimental GPU is an NVIDIA GeForce RTX 3090, and the CUDA version is 11.7, for your reference. If you still have any problems, please let us know and we will help you resolve them so that you can reproduce the experimental results.
> **Q6**: How do you expect RPO to perform in complex tasks with hard constraints, such as locomotion tasks or tasks requiring a robotic manipulator?
**A6**: Thank you for raising the concern. We believe that RPO can work in complex tasks with hard constraints. On one hand, as illustrated in **A1**, our test environment --- OPF with battery energy storage --- is actually a complex task with 28 equality constraints and 58 inequality constraints. In contrast, the constraints in locomotion tasks or tasks requiring a robotic manipulator, as in [R4], are generally simpler than in our OPF with energy storage; for example, [R4] contains only a few constraints. On the other hand, our experiments on classical robot control such as cartpole also confirm a certain degree of transferability of our method to robot control tasks.
[R4] Liu, P., Tateo, D., Ammar, H. B., & Peters, J. (2022, January). Robot reinforcement learning on the constraint manifold. In Conference on Robot Learning (pp. 1357-1366). PMLR.
---
Rebuttal Comment 1.1:
Title: Reviewer Response
Comment: I appreciate the thorough answers to my comments and the extra clarifications.
Based on the training times provided, there is not much overhead due to the solver, which makes RPO a viable option when dealing with hard constraints. A discussion of the possible solvers and optimization tools could be included in the manuscript/supplementary.
Additionally, the performance of your algorithm on the OPF with battery energy storage task, which consists of a large number of both equality and inequality constraints, is a plus.
Considering the above, I am increasing my score.
---
Reply to Comment 1.1.1:
Comment: Thanks for your further feedback and increasing the score. We will provide the corresponding discussion and the results of the additional experiments in the final version according to your suggestions. | Rebuttal 1:
Rebuttal: We thank the reviewers for their careful reading and their considerate, meaningful suggestions to help us improve our paper. We sincerely appreciate that the reviewers find our work "innovative" (RHA4), "interesting" (u5yD, ixcf), and "novel and well-motivated as the first attempt to introduce GRG to RL" (DYt8) with hard constraints, which can be "very beneficial" (RHA4) in the RL field and can handle both equality and inequality constraints in "a variety of" "complex" tasks (RHA4, u5yD). We are further glad that the reviewers agree unanimously that our manuscript is "well-written" and "thorough" (RHA4, DYt8), and that they confirm our contributions on both theoretical analysis and "promising" empirical results (RHA4, u5yD, DYt8) supporting our algorithm, as well as the "new" RL benchmarks with hard constraints (ixcf).
In the following, we will try to address the concerns/questions of the reviewers and present a detailed item-by-item response to their comments.
Firstly, we would like to offer several clarifications about the common issues from the reviewers.
**Q1**: **(1)** The three test environments used seem simple and insufficient to support the RPO algorithm, and additional benchmark problems should be considered. **(2)** The inequality constraints of three test benchmarks are simple, and more challenging inequality constraints need to be considered.
**A1**: **(1)** We believe our three test environments, with multiple equality and inequality constraints, are complex enough to verify the efficiency of our algorithm empirically. For example, **the case of OPF with battery energy storage involves 57 states, 43 actions, and contains 28 equality constraints and 58 inequality constraints with high nonconvexity**. It is worth noting that there is a lack of such complex benchmarks for testing our algorithm. We had to spend a lot of effort developing these RL environments with both hard equality and inequality constraints. Current RL environments with hard constraints mentioned in Section 2 are either not open-source or require numerous modifications, since they were designed for one specific application. Implementing additional benchmark problems, such as UAVs and robot dogs, is under consideration for our future work.
**(2)** We would like to highlight that there is **no need to evaluate our RPO with more challenging inequality constraints**. As we mentioned in lines 131-135, any nonlinear inequality constraint can always be transformed into an equality constraint plus an inequality box constraint by adding a slack variable. Hence, evaluating RPO with box inequality constraints and nonlinear equality constraints is sufficient to show RPO's generality. For example, if we have a complex inequality constraint $g(s, a) \le 0$, we can always transform it into
$$
\begin{aligned}
g(s,a) + \nu &= 0, \\\\
\nu &\ge 0,
\end{aligned}
$$
where $\nu$ is the slack variable, which can be viewed as the augmented actions in RL.
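A small sketch of this transformation in code, with the slack variable treated as an augmented action dimension (a hypothetical illustration, not part of our implementation):

```python
import numpy as np

def to_equality_form(g, s, a_aug):
    """Rewrite the inequality g(s, a) <= 0 as g(s, a) + nu = 0 with nu >= 0.

    a_aug = (a, nu): the original action a augmented with a slack
    variable nu, treated as one extra action dimension in the RL setup.
    Returns the equality residual (which must be zero) and whether the
    box constraint nu >= 0 holds.
    """
    a, nu = a_aug[:-1], a_aug[-1]
    return g(s, a) + nu, nu >= 0.0

# Example: g(s, a) = a[0]**2 - s, i.e., the inequality a0^2 <= s.
# With s = 4 and a0 = 1, choosing nu = 3 satisfies both conditions.
residual, box_ok = to_equality_form(lambda s, a: a[0] ** 2 - s,
                                    s=4.0, a_aug=np.array([1.0, 3.0]))
```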
**Q2**: How to divide the action into basic and nonbasic actions? What's the impact of different division choices?
**A2**: The systematic procedure below for dividing the full action space into the basic and nonbasic subspaces ensures the solvability of the equality constraints.
Firstly, assume there are $n-m$ equality constraints and the full action $a\in \mathbb{R}^n$. We can construct a $0$-$1$ relationship matrix $E$ with shape $(n-m)\times n$, where $E_{ij}$ indicates whether the equality constraint $f_i$ involves $a_j$.
For example, suppose we have 3 equality constraints on the action $a\in \mathbb{R}^4$:
$$
\begin{aligned}
f_1(a_1,a_2, a_4)=0, f_2(a_2,a_3)=0,f_3(a_1)=0,
\end{aligned}
$$
Then, the relationship matrix will be
$$
\left(
\begin{array}{ccc}
1 & 1 & 0 & 1 \\\\
0 & 1 & 1 & 0 \\\\
1 & 0 & 0 & 0
\end{array}\right)
$$
Now, we need to choose the nonbasic actions that cover as many of the equality constraints as possible. This is the maximum matching problem in a bipartite graph. Hence, (1,2,3), (1,2,4), and (1,3,4) are valid choices of nonbasic actions here, and the equations can be solved under such divisions. In contrast, if we choose (2,3,4) as the nonbasic actions, the equations cannot be solved: once the basic action $a_1$ is determined, the equations become
$$
f_1(a_2, a_4; a_1)=0, f_2(a_2,a_3)=0, f_3(\emptyset; a_1)=0,
$$
which is unsolvable.
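For implementation, the search for a valid division is a standard maximum bipartite matching and can be sketched with an off-the-shelf solver (a hypothetical illustration using SciPy, not our released code):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

# Relationship matrix E for the example above: rows are the equality
# constraints f1, f2, f3; columns are the actions a1..a4.
E = np.array([[1, 1, 0, 1],
              [0, 1, 1, 0],
              [1, 0, 0, 0]])

# perm[i] is the action matched to constraint f_{i+1}; the matched
# actions together form a valid choice of nonbasic actions.
perm = maximum_bipartite_matching(csr_matrix(E), perm_type='column')
nonbasic = tuple(sorted(int(j) + 1 for j in perm))  # 1-indexed
# nonbasic is one of the valid divisions (1,2,3), (1,2,4), or (1,3,4).
```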
In fact, some of our experiments were executed with random choices among the valid divisions, and we did not observe large variance across different valid divisions. We will add the above analyses in the final version. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Learning Energy-Based Prior Model with Diffusion-Amortized MCMC | Accept (poster) | Summary: The paper presents new training and sampling procedures for learning energy-based generative models. The method is compared to earlier (few-step) MCMC-based approaches and to diffusion models. The procedure is evaluated on computer vision tasks.
Strengths: The paper presents an original algorithm, with a clear methodological description, and a large collection of benchmarking results.
Weaknesses: The paper could make clearer for the general audience why additional energy-based modeling methodology is needed. As a researcher who has worked extensively on diffusion models for the past two years but has not worked with (other) energy-based modeling approaches in recent years, I would find additional context on the goals of this direction and the limitations of previous approaches helpful.
For example, the authors comment that EBMs are "closely related to" DDPMs, which do not suffer from the "sampling issue" associated with the "non-convergent short-run MCMC" used for EBMs. Is there something that can be accomplished with EBMs that is not solved by DDPMs that motivates their continued study?
The paper is also unclear with respect to the claim for "theoretical evidence that the learned amortization of MCMC is a valid long-run MCMC sampler". Could the authors make this theoretical claim more explicit (e.g. as a proposition or theorem)?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: No further questions.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Thank you for your detailed comments
We sincerely thank you for your time and detailed comments. Below, we provide point-to-point responses to hopefully address the concerns you have.
- > Additional context on the goals of this direction and the limitations of previous approaches would be helpful. Is there something that can be accomplished with EBMs that is not solved by DDPMs that motivates their continued study?
- Thank you for bringing this matter to our attention. We have generally introduced the background and related works of this direction in sec. 1 and 2 in the main text and sec. A in the appendix. We will make sure to more explicitly discuss these related works in revision.
- An example of "something that can be accomplished with EBMs that is not solved by DDPMs" can be found in [1], with a brief but insightful discussion in sec. 4.2 of [1]. For tasks that require an explicit likelihood function, e.g., compositional generation, the energy-based parameterization (EBM) is preferred over the epsilon-parameterization (DDPM) for its flexibility in modeling. This energy-based parameterization enables the use of more accurate samplers for greatly improved sample quality and convergence in the compositional generation task compared with DDPM.
- More broadly speaking, there are certain distributions for which the epsilon-parameterization cannot generate decent samples [2]. With the standard epsilon-parameterization one can only utilize unadjusted samplers, which generally perform well in practice. But for target distributions such as those with lighter-than-Gaussian tails [2], the unadjusted sampler chain is transient and may not produce the desired samples. Our method in this submission learns both the latent space EBM and its corresponding amortized sampler, thus combining the strengths of both sides, i.e., a well-learned explicit (unnormalized) density and a strong sampler that greatly mitigates the sampling issue.
- > Could the authors make the theoretical claim more explicit (e.g. as a proposition or theorem)?
- Thank you for pointing this out. We will consider revising our paper accordingly.
We hope our answers could further address the reviewer’s concerns of this work. If you have any additional questions/comments/concerns, please feel free to let us know here.
[1] Du et al. Reduce, Reuse, Recycle: Compositional Generation with Energy-Based Diffusion Models and MCMC. ICML 2023.
[2] Roberts and Tweedie. Exponential convergence of Langevin distributions and their discrete approximations. Bernoulli, pp. 341–363, 1996.
---
Rebuttal Comment 1.1:
Title: Thank you and follow-up question
Comment: Thank you for your replies.
To follow up on your question: in the context of your paper, is a DDPM an example of an energy-based model once (re-)parameterized in terms of its implied approximation to the gradient of log-densities? If so, the statement (w.r.t. reference [1] above) seems to be more about the parameterization of diffusion models than about energy-based models as a distinct class of models.
As the motivation for why this class of models is considered, and how it might address limitations of existing diffusion-based generative models, remains unclear to me, I maintain my score and low confidence.
---
Reply to Comment 1.1.1:
Title: Thanks for your reply
Comment: Thank you for your prompt response.
We would like to point out that although DDPM estimates the gradient of the log density of *noise-perturbed versions of the target density*, this gradient estimate **does not** translate to a reliable estimate of the *target density (without noise)* via re-parameterization. For a given noise level $\sigma > 0$, the denoising score matching or DDPM objective is not a consistent objective for learning the underlying target density without noise. Annealing the noise level $\sigma \to 0$ can mitigate this problem, but the gradient estimation is only reliable in the immediate vicinity of the modes of the target distribution, where the density is high. For low-density regions of the target distribution, the objective may not have enough evidence to estimate score functions accurately, due to the lack of samples [1].
In contrast, EBM is typically learned through Maximum Likelihood Estimation, which theoretically is the most accurate estimate in terms of asymptotic variance [2]. This formulation together with its learning algorithm produces much more reliable explicit (unnormalized) density estimation. From this point of view, these two models are different.
In our previous response, we discussed the importance of energy-based modeling, i.e., learning the density of target distribution explicitly. We hope the key differences between EBM and DDPM mentioned above in this response further clarifies the necessity of learning the explicit density of the target distribution.
Specifically in the set-up of our submission, we care about modeling the latent space. The latent space EBM provides explicit prior probability density $p_\alpha(z)$ for modeling latent variables $z$, and consequently defines explicit posterior density $p_\theta(z|x) \propto p_\alpha(z)p_\beta(x|z)$ for posterior inference upon which we build our approximate MLE learning algorithm. Simply plugging a DDPM into the latent space does not provide us with this well-defined MLE learning framework and could lead to problematic learning algorithms.
Finally, we would like to kindly refer to [3] for a comprehensive discussion about EBM and its connection with DDPM. We hope our response helps to address your concerns and explains why latent space EBM is considered in this work. Please feel free to let us know if you have any additional questions/comments/concerns.
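For readers less familiar with the latent-space setting, the posterior Langevin dynamics that our method amortizes can be sketched in a toy example (a hypothetical numpy illustration with a standard Gaussian prior standing in for the learned EBM prior and a linear generator; not our actual model):

```python
import numpy as np

def langevin_posterior_sample(x, g, z_dim, n_steps=100, step_size=0.05,
                              sigma=0.3, seed=None):
    """Short-run Langevin dynamics targeting p(z | x) ∝ p(z) p(x | z).

    Toy setting: standard Gaussian prior p(z) (standing in for the learned
    EBM prior) and generator x = g(z) + N(0, sigma^2 I).
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(z_dim)

    def log_post(zz):
        return -0.5 * zz @ zz - 0.5 * np.sum((x - g(zz)) ** 2) / sigma ** 2

    for _ in range(n_steps):
        # Central finite differences for grad log p(z|x); an autograd
        # framework would be used in practice.
        eps = 1e-5
        grad = np.array([(log_post(z + eps * e) - log_post(z - eps * e))
                         / (2 * eps) for e in np.eye(z_dim)])
        z = z + 0.5 * step_size ** 2 * grad \
              + step_size * rng.standard_normal(z_dim)
    return z

# Toy check: with g(z) = 2z, x = 2.0 and sigma = 0.3, the exact posterior
# mean is (2 * 2.0 / 0.09) / (1 + 4 / 0.09) ≈ 0.978.
samples = np.stack([langevin_posterior_sample(np.array([2.0]),
                                              lambda z: 2 * z,
                                              z_dim=1, seed=i)
                    for i in range(200)])
```

The DDPM sampler in our method amortizes exactly this kind of iterative procedure into a learned model.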
[1] Yang Song, and Stefano Ermon. Generative Modeling by Estimating Gradients of the Data Distribution. NeurIPS 2019.
[2] Peter J. Bickel and Kjell A. Doksum, Mathematical Statistics: Basic Ideas and Selected Topics, Volume I.
[3] Yang Song and Diederik P. Kingma. How to Train Your Energy-Based Models, arXiv:2101.03288 | Summary: This paper proposes the DAMC sampler (Diffusion-Amortized MCMC) and develops a new learning algorithm for LEBM (Latent-space Energy-Based Model) based on it. Theoretical and empirical evidence is provided for the effectiveness of the method.
Strengths: The paper is generally well-written. The idea of amortizing the LD with DDPM in learning the LEBM seems to be new.
Weaknesses: The basic idea of using an auxiliary model to amortize the LD in learning energy-based models has been well known and explored in the literature, e.g., in [a,b] to name a few. A methodological connection and an experimental comparison with those methods are needed.
By looking at FID on CIFAR-10 (Table 1), the performance of the proposed method (FID 57.72) is far behind previous methods [a,b] (33.61, 20.9).
In the current literature, long-run MCMC analysis is typically conducted over a range of 10,000 to 100,000 Langevin dynamics iterations. However, in this paper (Figure 4), the authors perform only 2,500 Langevin dynamics updates, which is considerably less extensive than other studies in the field.
[a] Cooperative Training of Descriptor and Generator Networks, arXiv:1609.09408v3
[b] Learning Neural Random Fields with Inclusive Auxiliary Generators, arXiv:1806.00271v4
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: see above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Thank you for your insightful comments
We sincerely thank you for your time and thoughtful comments! Below, we provide point-to-point replies to your comments that hopefully would address the concerns you have.
- > Methodology connection and experiment comparison with existing methods using an auxiliary model to amortize the LD, e.g., [a] and [b] are needed. The FID scores on CIFAR-10 are far behind [a] and [b]
- Thank you for pointing us to these interesting works! We have generally introduced the amortized MCMC methods including those using an auxiliary model in sec. A in the appendix. We will make sure to explicitly connect and discuss these methods including [a,b] in revision.
- We would like to point out that both [a] and [b] train the auxiliary models and energy-based models in the pixel space, which is essentially different from our set-up. In this paper, we focus on learning the energy-based prior and its corresponding posterior and prior sampler in the latent space of generative models. One of the key differences is that model design for learning the latent space samplers requires non-trivial extra effort, since, unlike the data space, the latent space is ever-changing during training; delicate designs are often needed to balance the learning of this amortized model and the other jointly learned models for stable training, if we want to employ these pixel space methods. In this work, we follow the method of [c] and train a single neural network to parameterize both the prior and posterior sampling models to deal with this problem.
- In addition, as mentioned in sec. 3.3 L184, we use the same network architectures as in previous works [d,e] for fair comparison with these methods. To be specific, as shown in sec. C in appendix the generator network used in this work for the CIFAR-10 experiment uses 5 layers of transposed convolution, while the generator network in [b] uses a ResBlock-based deeper generator. The FID scores are therefore not directly comparable between ours and those reported in [a] and [b]. We hope to explore more advanced architectures to address these issues in future work.
- > In the current literature, long-run MCMC analysis is typically conducted over a range of 10,000 to 100,000 Langevian Dynamics iterations.
- Thank you for bringing this matter to our attention! We checked the performance of our model for longer LD chains on the CIFAR-10, SVHN and CelebA64 datasets by calculating the FID scores of long-run samples. The results are summarized below. We can see that our model produces consistent results with long-run chains.
|Steps|100|200|2500|10000|100000
|:--|:--:|:--:|:--:|:--:|:--:|
|CIFAR-10|60.89|60.73|61.20|60.76|61.20|
|SVHN|21.17|20.91|20.07|20.68|20.71|
|CelebA64|35.67|35.39|35.40|35.29|35.17|
Thank you again for providing constructive and thoughtful feedback on our submission. If you have any additional questions/comments/concerns, please feel free to let us know here. **Otherwise, we would appreciate it if you would consider raising your rating of this submission**.
[a] Cooperative Training of Descriptor and Generator Networks, arXiv:1609.09408v3
[b] Learning Neural Random Fields with Inclusive Auxiliary Generators, arXiv:1806.00271v4
[c] Classifier-Free Diffusion Guidance. NeurIPS 2021 Workshop.
[d] Learning latent space energy-based prior model. NeurIPS 2020.
[e] Adaptive multi-stage density ratio estimation for learning latent space energy-based model. NeurIPS 2022.
---
Rebuttal Comment 1.1:
Title: Concern about the insufficiency of experimental results remains
Comment: --after reading feedback--
Thanks for the feedback from the authors.
The new result from long-run MCMC seems good.
Learning the energy-based model in the latent space may require more delicate design than learning in the pixel space. But the basic math required is not so innovative.
The authors refer their architectures to previous works [d,e], and I can see that the network architectures used in this work are weaker than those used in [a,b]. However, remarkably, the results in [a,b] date back to 2016-2018. If results in 2023 cannot show improvements over these classic ones, I am not convinced that the claimed effectiveness of the new method in this work represents real progress in learning EBMs. One may doubt whether the proposed method works when using a deeper architecture. My main concern about the insufficiency of the experimental results remains. I suggest that the authors show results using a somewhat deeper, more advanced network, which is not difficult experimentally. I think such a comparison would be really beneficial for the community in advancing EBM research.
---
Reply to Comment 1.1.1:
Comment: Thank you for your further response. We provide point-to-point replies below.
- > If the results in 2023 cannot show improvements over these classic ones, the proposed method might not make a real progress in learning EBMs.
We respectfully disagree with this statement for the following reasons.
- **The proposed method and the mentioned previous methods work for different models in completely different set-ups.** We would like to continue our discussion in our previous response to further clarify that these methods are essentially different.
As mentioned in sec. 1 L16-28 in the main text, we specifically consider *learning an EBM in the latent space for latent variables $z$ (LEBM) as an informative prior* for the generator network in this work. Although our focus is also on learning an EBM, the LEBM stands on the generator to capture the regularity of its latent variables $z$, while the main generation process is done by the generator network $g$. The learning process is built upon the posterior inference of $z$ to formulate an EM-like approximate MLE learning algorithm. In this formulation, the generator is directly supervised by the reconstruction error of the observed data $x$.
In contrast, in the set-up of [a,b], the EBM resides explicitly in the data space for $x$ and serves as a refiner of the initial output of the generator network. The learning process does not involve inference of $z$ or modeling of the latent space. In [a], the refined output is further used as the training sample of the generator, while in [b] the generator is trained by optimizing the learned energy score function.
In sum, the key differences between our set-up and [a,b] include, at least, the following ones:
- Although we all consider learning EBMs, the roles of EBMs are completely different. In our work, the light-weight LEBM is used as a flexible yet powerful prior for the generator, while in [a,b] the EBMs are used as a refiner or the main generation model; the learning processes in [a,b] do not involve inference of $z$ or modeling of the latent space.
- Although both our work and [a,b] involve learning the generator, we can see in [a,b] the generator is supervised by a different objective that incorporates the training signal from the data-space EBM, rather than directly optimizing the reconstruction error of the original observed data $x$, i.e., $\log p(x | z)$ in our set-up.
Therefore, we think the differences of FID scores of these methods **may not** serve as valid indicators, since the methods work for different models in completely different set-ups and are not directly comparable.
- **The proposed method and the mentioned previous methods are not direct competitors, but are actually complementary to each other.**
Following our discussion above, we would like to add that a more reasonable comparison would be incorporating the LEBM in the latent space of the generator used in [a,b], and jointly training the LEBM, the generator, and the data-space EBM to see whether the composed methods achieve higher performance. These methods are not direct competitors, but can be complementary to each other to further improve previous methods. In our initial submission, we prefer to keep our model and learning method clean and simple, without involving extra networks and learned computations. We are happy to explore these directions in future work.
- > One may doubt whether the proposed method can work or not when using a deeper architecture.
We would like to point out that in sec. 4.1 L215-228 and tab. 1 and fig. 2 CelebA-HQ columns we have explicitly provided the results on higher dimension data (256x256) to show that our method can scale up with deeper architecture (supp. C tab. 2).
To further verify the effectiveness of this method on more advanced architecture, in sec. 4.1 *GAN inversion* paragraph L229-254 we have provided further results using the StyleGAN network as the generator.
Therefore, we believe we have provided positive evidence that our method can effectively work with deeper architectures to scale up.
- > Suggest that the authors show results with a bit deeper advanced network, which is not difficult for experiments.
Thank you for the constructive suggestion. Due to the time limit, we were only able to experiment on slightly deeper generators and report very preliminary results on CIFAR-10. Here we only add `conv3x3` layers to the original one; `+N` shows the number of additional layers and `w/ res` means adding residual connection to the conv layers. Please note that these models have not reached their best performances, but have already shown some initial improvements over the original model.
||+0|+1|+2|+4 w/ res|
|:--:|:--:|:--:|:--:|:--:|
|FID|57.72|57.63|55.50|54.59|
We hope our response helps to address your concerns. Please feel free to let us know if you have any additional questions.
Title: Thank you for your reply | Summary: The authors propose DAMC, an amortization of MCMC sampling via a scheme based on diffusion models, as an alternative to pure MCMC sampling of priors and posteriors in energy-based models, which usually suffers either from long mixing times or from being short and biased. The method is theoretically sound, and experimentally argued for.
Strengths: The theoretical analysis is sound. The algorithm is clear. The experiments are convincing.
Weaknesses: N/A
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Are there some constraints that the latent space needs to satisfy for the proposed method to have an advantage over the vanilla MCMC counterpart ?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: I see None for now.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Thank you for your insightful comments
We sincerely thank you for your kind words and thoughtful comments! Below, we provide point-to-point responses to hopefully address the concerns you have.
- > Are there some constraints that the latent space needs to satisfy for the proposed method to have an advantage over the vanilla MCMC counterpart?
- Thank you for this very insightful input! For now, we see no obvious constraints on the latent space for the proposed method to outperform the vanilla short-run MCMC counterpart. We observed that our method shows a clearer advantage over the vanilla short-run MCMC method when the target distribution is highly multimodal or generally hard for short-run MCMC to fully explore.
- We also see no obvious issues for now with generalizing our method to learning amortized samplers for (unnormalized) densities or distributions other than those in energy-based models (including densities arising in molecular dynamics and simulated-annealing optimization). We are excited about these problems as a direction for future work.
Thank you again for providing constructive and thoughtful feedback on our submission. If you have any additional questions/comments/concerns, please feel free to let us know here.
---
Rebuttal Comment 1.1:
Comment: Thank you for your answers ! | Summary: This paper proposes a diffusion-based amortised method to address the issues of short-run MCMC samplers in latent-space energy-based models. One interesting part is that it interleaves distilling $T$ steps of Langevin dynamics with KL-divergence minimisation to sample from the target distribution $\pi$. Regarding the choice of amortised sampler parameters $\phi$, it uses the gradient of the DDPM objective function to optimise them. The experimental results have verified the effectiveness of DAMC.
Strengths: The paper has the following strengths:
**1**, the usage of KL minimization and amortised MCMC transitions forms a well-defined framework. Also, the usage of the DDPM gradient for long-run MCMC sampling in LEBMs is interesting.
**2**, the experimental evaluation is sufficient and clearly verifies the advantages of DAMC.
Weaknesses: Regarding the weaknesses, I think the paper may need another round of polish. Some notations are not well defined before use, and some introductory statements are inconsistent. For example:
**1**, p_{uncond} may need more explanation, rather than a short word in the input of Algorithm.
**2**, $z_+^{(i)}, z_-^{(i)}$ are a bit confusing.
**3**, it may be a better idea if the notations of section 2.1 and 2.2 are consistent
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Sorry I am not an expert in this topic. I do not have questions for the authors.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: I believe the authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Thank you for your constructive comments
We sincerely thank you for your kind words and thoughtful comments! Below, we provide point-to-point responses to hopefully address the concerns you have.
- > p_{uncond} may need more explanation, rather than a short word in the input of Algorithm. $z_{+}$ and $z_{-}$ are a bit confusing.
- Thank you for pointing this out. We have introduced the meaning of p_{uncond} in sec 3.3 L175-180. $z_{+}$ and $z_{-}$ denote the posterior and prior samples respectively.
- We agree that more explicit explanations of the hyperparameters and notations would make this draft more readable. We will make sure to revise our paper accordingly.
- > It may be a better idea if the notations of section 2.1 and 2.2 are consistent
- Thank you for bringing this matter to our attention! We choose these notations to make sure that they are consistent with those used in Fig 1. We will revise these notations according to your kind suggestion.
Thank you again for providing constructive and detailed feedback on our submission! If you have any additional questions/comments/concerns, please feel free to let us know here. | Rebuttal 1:
Rebuttal: ### Summary of our response
We thank the reviewers for their insightful and constructive comments and careful reviews of our paper! We appreciate that the reviewers consider our submission "well-written and well-motivated", "clearly stated", "new" and "interesting" and provide "convincing", "diverse" and "extensive" experiment results. We have provided point-to-point replies to your comments that hopefully would address the remaining concerns you have. We summarize our response as follows:
- Clarification
- Key differences between our method and the ideas mentioned in [1, 2] and [3, 4]. (Response for reviewer x1ko and hz9t)
- Discussion for advantages of energy-based parametrization over epsilon-parameterization (Response for reviewer rkuE)
- Further Experiments
- Toy example from [5] as a proof-of-concept to show that the proposed amortization scheme can converge to the true distributions. Please see the attached PDF file for the additional results. (Response for reviewer x1ko)
- Adding more baselines including training DDPMs in the latent spaces of VAE and ABP models using the current network architectures. (Response for reviewer x1ko)
- Adding ablation studies for steps of Langevin Dynamics $T$ and the model capacity of the amortizer $q_{\phi}$. (Response for reviewer x1ko)
- Adding quantitative results for longer-run chain analysis. (Response for reviewer hz9t).
- Including missing references pointed out by reviewers.
- Polishing notations and writing in general as suggested by reviewers.
Thank you again for providing constructive and detailed feedback on our submission. If you have any additional questions/comments/concerns, please feel free to let us know.
[1] Pang et al. Learning Latent Space Energy-Based Prior Model. NeurIPS 2020.
[2] Pandey et al. DiffuseVAE: Efficient, Controllable and High-Fidelity Generation from Low-Dimensional Latents. TMLR 2022.
[3] Xie et al. Cooperative Training of Descriptor and Generator Networks. TPAMI 2018.
[4] Song and Ou. Learning Neural Random Fields with Inclusive Auxiliary Generators. arXiv:1806.00271.
[5] Taniguchi et al. Langevin Autoencoders for Learning Deep Latent Variable Models. NeurIPS 2022.
Pdf: /pdf/11d99a90e1078424547ab328684303be60df52dc.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper proposes a diffusion-based amortized MCMC method for sampling the prior and posterior in latent-space energy-based models. The paper provides some theoretical evidence by directly using a result from Li et al., 2017. The paper shows the effectiveness of the proposed method through an extensive experimental campaign on a variety of tasks such as image generation and reconstruction, and anomaly detection.
Strengths: - This paper aims at tackling an important task in deep latent variable models, which is to improve energy-based prior model.
- The paper is well-motivated and is well-written.
- The experiments on image datasets are extensive and diverse.
Weaknesses: - The idea of amortizing the short-run MCMC sampling of the prior and posterior distributions is already proposed in the original work of Pang et al., 2020. This paper aims to improve this amortization using a diffusion model. In addition, the theoretical contribution is weak, as the convergence property is already shown by Li et al., 2017.
- Although the authors claim that the theoretical evidence that the learned amortization of MCMC is a valid long-run MCMC sampler, this result is taken directly from Li et al., 2017. As such, the authors should show that the proposed amortization scheme can converge to the true distributions, at least via toy examples. For example, the authors could use examples from this work [1].
- The improvement is expectable as the authors use a very powerful diffusion model to amortize the prior and posterior. However, this increases the computational costs, as the authors discussed at the end of Section 4.1.
- As, again, the authors employ a powerful diffusion model in latent space. To be fair, the authors should consider baselines that use diffusion models in the latent space of latent variable models; for example [2], at the very least.
[1] Taniguchi et al., Langevin Autoencoders for Learning Deep Latent Variable Models. NeurIPS 2022.
[2] Pandey et al., DiffuseVAE: Efficient, Controllable and High-Fidelity Generation from Low-Dimensional Latents. TMLR 2022.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - There are a lot of hyper-parameters in the proposed methods. How do the authors choose $T$?
- It would be great if the authors could ablate the amortization gap of the proposed method by considering different capacities of the $q_{\phi}$
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Thank you for your detailed and insightful comments
We sincerely thank you for your time and constructive comments! Below, we provide point-to-point replies to your comments that hopefully would address the concerns you have.
- > The idea of amortizing the short-run prior and posterior MCMC sampling is already proposed in [1].
- As mentioned in sections 2.6 and A.11 of [1], they propose to amortize the posterior sampling process with the network $q_{\phi}(z|x)$ following the variational inference (VI) scheme introduced by [2]. However, the Gaussian (or other tractable density) assumption on the posterior distribution made in this scheme could greatly limit the expressivity of $q_{\phi}(z|x)$ [3,4]. In this work, we use a conditional diffusion model to address this issue. As shown in tab. 1, we have compared our model with the NCP-VAE baseline, which indeed trains the inference network through VI to learn the energy-based prior. We can see that our method significantly outperforms NCP-VAE. On the CIFAR-10 dataset, the FID and MSE scores are 78.06 vs. 57.72, and 0.054 vs. 0.015.
- In section A.11, [1] mentioned that one can learn a synthesis network $q_{\psi}(z)$ for prior sampling, but did not provide a concrete solution to this problem. Model design for amortizing the prior sampling requires non-trivial extra effort, since the latent space keeps changing during training; delicate designs are needed to balance the learning of this amortized model and the other jointly learned models for stable training. In this work, we follow the method of [5] and train a single neural network to parameterize both the prior and posterior sampling models to deal with this problem.
- > The authors should show that the proposed amortization scheme can converge to the true distributions, at least via toy examples in [4].
- Thank you for pointing us to this very interesting example! We have implemented the neural likelihood example following the same set-up as in sec. 5.1 and B.1 of [4]. We choose a more complex prior distribution, i.e., a 2-arm pinwheel-shaped prior instead of a standard normal one, to make sure that the true posterior distributions are multimodal. We have attached the visualization of the convergence of the posterior distributions learned by our model to the ground-truth ones (please see the global response). We can see that our model faithfully reproduces the ground-truth distributions. We will add these results in revision and make explicit connections to [4].
- > The authors use a very powerful diffusion model to amortize the prior and posterior. However, this increases the computational costs.
- We would like to point out that we actually used a very light-weight MLP-based diffusion model for all the experiments in this paper, since our models operate in the low-dimensional latent space. As mentioned in sec 4.1 L257-258 in the main text and G.1 in the appendix, the number of parameters in the diffusion model is only around 10% of that in the generator.
- Consequently, the computational costs induced by this model are very marginal. To be specific, as mentioned in sec 4.1 L258-261, with a batch size of 64, our method takes ~0.3s for prior sampling, while 100 steps of short-run LD with LEBM takes 0.2s; posterior sampling with the proposed method takes ~1.0s, while posterior LD sampling takes ~8.0s because it requires back-propagation through a much heavier generator network.
- > Should consider the baselines of using diffusion models in the latent space of latent variable models; for example [6].
- Thank you for pointing us to this very interesting work! However, we notice that DiffuseVAE actually trains a diffusion model in the pixel space to refine the output of a VAE model. This is essentially different from our set-up, where we learn the models in the latent space of generative models. We will discuss the key differences between DiffuseVAE and our work in revision.
- We have compared our method with directly training diffusion models in the latent space. In the NALR-LEBM column of tab. 4 in sec 4.3 and L292-300, we compared our method with learning a diffusion model in the pre-trained energy-based latent space on the CIFAR-10 dataset. We further add experiments on learning this model in the latent spaces of pre-trained VAE and ABP models, summarized below. We can see that our method greatly outperforms these baselines using the same network architectures.
||VAE|ABP|NALR|Ours|
|:--:|:--:|:--:|:--:|:--:|
|FID|102.54|69.93|64.38|**57.52**|
|MSE|0.036|0.017|0.016|**0.015**|
- > How do the authors choose $T$? Would be great to consider different capacities of the $q_{\phi}$.
- Thank you for bringing this matter to our attention! For prior sampling, we follow the set-up of [1] for fair comparison. We further add experiments on CIFAR-10 for different numbers of posterior steps and capacities of $q_{\phi}$, summarized below. We used $T=30$ for training in this paper. We can see that larger $T$ and larger $q_{\phi}$ bring very marginal improvements.
||T=10|T=30|T=50|
|:--:|:--:|:--:|:--:|
|FID|74.20|**57.72**| 57.03|
|MSE|0.016|**0.015**| 0.015|
- Here $f$ stands for the factor for the $q_{\phi}$ capacity, e.g., $f=2$ means 2x the size of the original model.
||f=1/4|f=1/2|f=1|f=2|f=4|
|:--:|:--:|:--:|:--:|:--:|:--:|
|FID|116.28| 80.28|**57.52**|57.82|57.56|
|MSE|0.017| 0.016|**0.015**|0.015|0.015|
Thank you again for providing constructive and detailed feedback on our submission. If you have any additional questions/comments/concerns, please feel free to let us know here. **Otherwise, we would appreciate it if you would consider raising your rating of this submission**.
[1] Pang et al. LEBM. NeurIPS 2020.
[2] Kingma et al. VAE. 2013
[3] Rajesh et al. HVAE. ICML 2016.
[4] Taniguchi et al. LAE. NeurIPS 2022.
[5] Ho et al. Classifier-Free Guidance. NeurIPS 2021 Workshop
[6] Pandey et al., DiffuseVAE. TMLR 2022
---
Rebuttal Comment 1.1:
Title: Post Rebuttal Reply
Comment: I thank the authors for their response and the effort they put in resolving my concerns. The new results are very encouraging.
Might I kindly ask the authors to provide the code for the toy examples? This would enable both me and the other reviewers to quickly validate the proposed method and the results.
---
Reply to Comment 1.1.1:
Title: Anonymous Code Submission
Comment: We thank the reviewer for your prompt reply! We are pleased to hear that our new results are very encouraging. We have asked our AC for details about anonymous code submission, since we are only allowed to submit the code link to AC according to the instructions this year. We have prepared the code and will submit the link immediately once permitted. Here we provide more details about our code submission to help you reproduce and validate our results.
- Environment Specification
- We use Pytorch to train our models. We did not use other third-party python packages for implementation. Please feel free to let us know if there are any dependency issues. We will be more than happy to help.
- Version of packages used in the toy example experiments:
- Python == 3.9.2
- Pytorch == 1.10.0
- numpy == 1.21.2
- matplotlib == 3.3.4
- Code Structure
We have added comments to most functions and code blocks in our code. The code files should have the following structure:
```
toy_code/
src/
diffusion_helper_func.py
diffusion_net.py
toy_example.py
```
- `toy_example.py` is the main file. It includes the training algorithm, data generation pipeline and visualization functions.
- `src/diffusion_net.py` contains the detailed network structure of the diffusion network.
- `src/diffusion_helper_func.py` contains helper functions for implementing the denoising diffusion process.
- How to Run
To train the model on the toy example, you can simply run the following command in the `toy_code` folder.
```
CUDA_VISIBLE_DEVICES=<DEVICE_ID> python toy_example.py --seed <RANDOM_SEED_TO_SPECIFY>
```
For example,
```
CUDA_VISIBLE_DEVICES=0 python toy_example.py --seed 0
```
- Here `--seed` argument specifies the random seed, which basically decides the ground-truth posterior distribution. The script will automatically generate a `logs/toy/<TIMESTAMP>` folder in the `toy_code` folder, where `<TIMESTAMP>` indicates the time you started this training process.
- There will be two automatically created additional folders in `logs/toy/<TIMESTAMP>` once the script runs: i) `ckpt`, which saves the trained weights, and ii) `viz`, which saves the visualizations of the ground-truth and learned posterior distributions. The image file names are `<ITERATION>_lang_post_Q` and `<ITERATION>_lang_post_gt`, which indicate the visualization of the learned distribution and the ground-truth distribution respectively. These visualization results are saved every 100 iterations.
- Important Tips about Training
- For most random seeds, we observed that our learned sampler could achieve a decent approximation of the ground-truth posterior distributions obtained by long-run Langevin dynamics within 300-3000 training iterations. This would take from several minutes to an hour or so on an NVIDIA RTX A6000 GPU. The training process takes ~2GB of GPU memory. It is possible that there are some extreme cases where longer training is needed to produce decent results.
- For some random seeds, the default 1000-step Langevin dynamics for sampling the ground-truth posterior distribution might not converge. You may consider using 2000 or more steps by modifying the `g_l_steps` argument in the `sample_langevin_post_z` function at L277 in `toy_example.py`. One possible sign is that the `g_loss (avg) Q` (reconstruction error obtained by learned posterior samples) is significantly lower than `g_loss (avg) L` (reconstruction error obtained by Langevin dynamics samples).
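For reference, short-run Langevin posterior sampling of the kind `sample_langevin_post_z` performs can be illustrated with a much-simplified, self-contained numpy analogue. Everything below is a hypothetical sketch for a linear-Gaussian toy model (generator $x = Gz + \sigma\varepsilon$, standard normal prior), where the posterior gradient is available in closed form; the signature and internals of the actual function in the released code differ.

```python
import numpy as np

def sample_langevin_post_z(x, G, z0, n_steps=100, step=0.1, sigma=0.3, seed=None):
    """Short-run Langevin dynamics targeting p(z|x) for x = G z + sigma * noise,
    with a standard normal prior on z, so grad_z log p(z|x) is closed-form."""
    rng = np.random.default_rng(seed)
    z = z0.copy()
    for _ in range(n_steps):
        # grad_z log [ p(x|z) p(z) ] = G^T (x - G z) / sigma^2 - z
        grad = G.T @ (x - G @ z) / sigma**2 - z
        # Langevin update: drift along the gradient plus injected Gaussian noise
        z = z + 0.5 * step**2 * grad + step * rng.standard_normal(z.shape)
    return z
```

Because the toy posterior is Gaussian here, the chain's samples can be checked against the analytic posterior mean, which is how one might sanity-check such a sampler before moving to a neural generator.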
Finally, we kindly ask the reviewer not to distribute our code, since we have not officially published our work. Thank you again for your time evaluating our work; please feel free to let us know if you have any additional questions/comments/concerns.
---
Reply to Comment 1.1.2:
Title: Anonymous Code Submitted
Comment: Thank you for your patience. We have submitted the anonymous code for reproducing the results for the toy example to our AC. Please feel free to let us know if you have any problems with the code. | null | null | null | null | null | null |
Entropic Neural Optimal Transport via Diffusion Processes | Accept (oral) | Summary: The paper proposes to solve dynamic entropic optimal transport (EOT), also known as the Schrödinger bridge problem, with a neural solver. Specifically, the authors propose a saddle-point, maximin, formulation of EOT, yielding a GAN-like algorithm that can be trained in an end-to-end fashion. Experiments are conducted on 2D toy datasets and image translation.
Strengths: - The saddle point reformulation of EOT is interesting. The proposed algorithm can be viewed as a stochastic control problem, where the terminal cost function $\beta$ is learned so that the resulting policy approaches terminal distribution.
- Writing is generally clear and easy to follow. Sufficient related works and preliminaries are included.
- Experiments are extensive and include many related baselines.
Weaknesses: - The proposed algorithm aims to minimize (12), which is not well-defined when $\epsilon$=0, as the KL term will blow up. Even though $\epsilon$=0 is algorithmically applicable, it makes the current algorithm disconnected from the mathematical framework. I'll be more convinced if the authors can provide additional justification (maybe connection to OT).
- The proposed method is closely related to the recent maximin OT approach [1], which consists of the same two networks (potential + policy) and the same training losses. From my understanding, the two methods coincide when $\epsilon$=0 and $N=1$. Are the authors aware of [1]? Given that the proposed ENOT seems to work best when $\epsilon$=0, can the authors compare with [1]?
- Given that DiffSB [13] was compared throughout most Sec 5, I suggest the authors to compare in Sec 5.4 to [2], which applies DiffSB to unpaired super-resolution image datasets.
- While Sec 2 has introduced sufficient background and comparison between EOT and SB, which I do appreciate, I think their connection to Sec 4, which is the main content, is rather weak. Given that the proposed algorithm closely relates to dual formulation (e.g. (26) in Appendix), I suggest including those parts in Sec 2.
[1] On amortizing convex conjugates for optimal transport
[2] Conditional Simulation Using Diffusion Schrödinger Bridges
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Experiments seem to support that $\epsilon$=0, where the dynamics reduce to flow, works much better than SDE. This seems to contradict recent insights from diffusion models [1], where SDE usually performs better than ODE on the same tasks. Can the authors comment on that?
- Why is there a "1/|f|" term in Alg 1 when computing the KL? And what is $f_{n,m}$? I understand that $f_n$ is the drift output at time step $n$.
[1] Elucidating the Design Space of Diffusion-Based Generative Models
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Limitations are included in Sec6, where the authors mentioned the potential computational burden caused by simulation and back-prop through the SDE dynamics during training.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer KyaP, thank you for your comments. Here are the answers to your questions.
**(1) Connection to OT for the unregularized case $\epsilon=0$.**
**We emphasize that our work focuses on developing a new algorithm for solving entropic OT and the equivalent SB problem. This implies that $\epsilon>0$.** We present the empirical results of our algorithm for $\epsilon=0$ only for completeness since it *technically* allows the use of $\epsilon=0$ (see lines 226-227). Given your interest in studying this case in detail, we provide **the proof sketch** to show that for $\epsilon=0$ our saddle-point objective
$$
\sup_{\beta} \inf_{T_f} \mathcal{L}(\beta,T_f) = \mathbb{E}_{T\_f}( \int\_0^1 ||f(X_t, t)||^2 dt) - \int\_{\mathcal{Y}} \beta(y) d\pi_1^{T_f}(y)+ \int\_{\mathcal{Y}} \beta(y) d\mathbb{P}_1(y) ,
$$
$$
s.t. \quad X_0 \sim \mathbb{P}_0, \quad T_f : dX_t = f(X_t, t)dt.
$$
is a max-min reformulation of the **Benamou-Brenier** problem (the dynamic OT with $\ell^{2}$ cost):
$$
\mathcal{L}^*=\inf_f \mathbb{E}\_{T_f}[\int_0^1 ||f(X\_t, t)||^2 dt], \quad s.t. \quad X\_0 \sim \mathbb{P}\_0, \quad T\_{f} : dX_t = f(X_t, t)dt, \quad X_1 \sim \mathbb{P}_1.
$$
which searches for an ODE drift $f$ which moves the mass of $\mathbb{P}_0$ to $\mathbb{P}_1$.
**SKETCH OF THE PROOF**:
*Step 1 (Auxiliary functional, analog of Lemma B.3).*
We introduce an auxiliary functional:
$$\widetilde{\mathcal{L}}(\beta,H)=\int\_{\mathcal{X}} \|x-H(x)\|^{2}d\mathbb{P}\_{0}(x)-\int\_{\mathcal{X}} \beta(H(x))d\mathbb{P}\_0(x)+\int\_{\mathcal{Y}} \beta(y)d\mathbb{P}\_1(y),$$
and consider the following maximin reformulation of OT with the quadratic cost [1, Eq.4]:
$$
\sup_{\beta}\inf_{H}\widetilde{\mathcal{L}}(\beta,H) = \inf\_{T\sharp \mathbb{P}\_0 = \mathbb{P}\_1} \int\_{\mathcal{X}} ||x - T(x)||^2 d\mathbb{P}\_0(x) = \mathcal{L}^{*}.
$$
*Step 2 (Solution of the inner problem is always an OT map).*
It can be shown that the minimizer of the inner problem exists ($H^{\beta}$), i.e.:
$$
H^{\beta}\in \text{arg}\min\_{H\sharp\mathbb{P}_0=\mathbb{P}'}\int\_{\mathcal{X}} \big\lbrace\|x-H(x)\|^{2}- \beta(H(x))\big\rbrace d\mathbb{P}\_0(x)
$$
Moreover, $H^{\beta}$ is an OT map between $\mathbb{P}_0$ and $\mathbb{P}'\stackrel{def}{=}H^{\beta}\sharp \mathbb{P}_0$.
*Step 3 (Equivalence for inner objective values).*
Since $H^{\beta}$ is the OT map between $\mathbb{P}\_0,\mathbb{P}' $, it can be represented as the solution $T_{f^{\beta}}$, an ODE with zero acceleration ($\frac{df^{\beta}(x(t), t)}{dt}=0$), of the Benamou-Brenier problem between $\mathbb{P}\_0,\mathbb{P}'$, for which $\|x-H^{\beta}(x)\|^{2}=\int_{0}^1 ||f^{\beta}(X_t, t)||^2 dt$, i.e.:
$$\inf_{H}\tilde{\mathcal{L}}(\beta,H) = \inf_{T_f}\mathcal{L}(\beta, T_f).$$
*Step 4 (Equivalence of the saddle point objective).*
Take $\sup$ over $\beta\in\mathcal{C}\_{b,2}(\mathcal{Y})$ and get the final equivalence:
$$\sup\_{\beta}\inf\_{H}\widetilde{\mathcal{L}}(\beta,H)= \sup\_{\beta}\inf\_{T_f}\mathcal{L}(\beta,T_{f})= \mathcal{L}^{*}.$$
We cannot give the full proof due to the length limit of the answer, but per request, we can provide it. **At the same time, our paper focuses on solving the problem with $\epsilon > 0$.**
**(2) Comparision with [1].**
We are aware of [1]. However, since they address the unregularized OT problem, we have not compared our results to theirs. As per your request, we have trained our method with $\epsilon=0$ using the same image benchmark setup [2] and present our results alongside the results from Table 2 of [1] **in Table 4 of the attached pdf**. Since the method presented in [1] compares with the MM-R method from [2], we also present its results.
As we can see, ENOT with $\epsilon=0$ works better than the MM-R solver but slightly underperforms compared to [1].
**(3) Comparison with [3].**
We do not compare with DiffSB since its authors do not consider unpaired translation for image spaces larger than grayscale $32$x$32$. In their official GitHub repository there is no config for unpaired translation at all. Moreover, the authors themselves did not use the Wiener prior for SB, see [3, Appendix J.3]. After trying on our own to tune the hyperparameters which we used for the Colored MNIST setup and obtaining poor results, we decided not to scale this algorithm further to the unpaired CelebA setup.
**(4) Including dual form in Sec 2.**
We agree that dual form formulation of OT problems also could be discussed in Section 2 since our proofs are based on it. We will add a discussion about dual OT formulation and methods based on it (such as [1] and [2]) to the main text of the final version.
**(5) Comparison of SDE and ODE.**
Our proposed algorithm has a larger gradient variance when $\epsilon$ is larger, which may affect the final quality. To improve the results, one can use more steps for sampling from SDE or adjust the learning rate. In the paper, we present results for different $\epsilon$ while keeping all the other hyperparameters the same.
**(6) why is there a "1/|f|" term in Alg 1, when computing the KL? And what's $f_{n,m}$? I understand tht $f_n$ is the drift output at time step $n$.**
We use $f_{n, m}$ to denote the drift output at time step $n$ for the $m$-th object of the input batch. We use the average of the squared norms $||f_{n,m}||^2$ of these drifts as an estimate of $\int_{0}^{1} ||f(X_t, t)||^2 dt$ in the training objective; the "1/|f|" factor implements this averaging over the time steps.
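To illustrate, here is a hypothetical numpy sketch (function name and signature are ours, not from Alg 1) of how per-step squared drift norms $||f_{n,m}||^2$ can be accumulated during an Euler-Maruyama rollout and averaged over the time steps to estimate $\int_0^1 ||f(X_t,t)||^2 dt$ for each batch item:

```python
import numpy as np

def drift_energy_estimate(f, x0, n_steps=10, eps=1.0, seed=None):
    """Euler-Maruyama rollout of dX_t = f(X_t, t) dt + sqrt(eps) dW_t on [0, 1].

    Returns, per batch item m, the average of ||f_{n,m}||^2 over the n_steps drift
    evaluations (a Monte-Carlo estimate of the integral of ||f||^2 dt, since
    dt = 1/n_steps), together with the terminal state X_1."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_steps
    x = x0.copy()                          # shape (batch, dim)
    sq_norms = []
    for n in range(n_steps):
        t = n * dt
        drift = f(x, t)                    # f_{n,m}: drift for item m at step n
        sq_norms.append(np.sum(drift**2, axis=1))
        x = x + drift * dt + np.sqrt(eps * dt) * rng.standard_normal(x.shape)
    # averaging over the n_steps evaluations plays the role of the 1/|f| factor
    return np.mean(np.stack(sq_norms, axis=0), axis=0), x
```

With a constant drift $f \equiv c$ and $\epsilon = 0$, the estimate reduces exactly to $||c||^2$ and the terminal state to $x_0 + c$, which gives a simple correctness check.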
**Concluding remarks.**
We would be grateful if you could let us know if the explanations we gave have been satisfactory in addressing your concerns about our work. If so, we kindly ask that you consider increasing your rating. We are also open to discussing any other questions you may have.
**Additional references.**
[1] Amos,. "On amortizing convex conjugates for optimal transport."
[2] Korotin, et al. "Do neural optimal transport solvers work? a continuous wasserstein-2 benchmark."
[3] De Bortoli, et al. "Diffusion Schrödinger bridge with applications to score-based generative modeling."
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed response. It would be great to add these clarifications into the revision. I'll update my score. | Summary: Inspired by how the Sinkhorn duals are derived, the authors adapt said derivation to the path measure and, via the disintegration theorem, derive a novel unconstrained min-max objective for solving the Schrödinger bridge problem. The authors then proceed to showcase their method on EOT-based tasks, introducing a new Gaussian benchmark and displaying results competitive with previous approaches. Additionally, the authors quantify errors in sampling and transport for approximate minimisers of their proposed schemes.
Strengths: 1. The extension of the Sinkhorn dual to the dynamic setting is rather elegant and certainly novel in the way it is carried out
2. The paper is excellent on the presentation side in regards to technical ideas, whilst the contributions are novel/creative they are presented in such a way that understanding them was not difficult.
3. The new formulation allows for a novel duality gap analysis, which is one of the few works analysing learned SBP methods in the approximate setting.
4. From a purely conceptual viewpoint the work is great and rather complete; only some clarifications/additions on the experimental side could be made and the motivations could be enhanced.
Weaknesses: Outside of a concern in how different methods are compared (detailed in the questions), this paper is overall well written and has mostly sound experiments.
From the method standpoint, there are several potential weaknesses and lacking ablations which I will detail in the limitations.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Some of the methods you compare to are closed form / non-iterative in the way they solve the half bridges e.g. [1] using GPs. A thing to note is iterative approaches such as SGD can keep resampling from P_0 and P_1 in the toy tasks nonstop, effectively making the dataset set size somewhat proportional to the iterations, this strongly benefits generalisation capabilities, especially in higher dimensions in contrast to a the GP approach in [1] which is likely fitted on a small dataset. If we assume that the dataset is fixed across epochs K_f * batchsize would give a proxy as to the size of the dataset used in the DL approaches , from inspecting the code provided in the supplementary zip in particular the high dim Gaussians and toy examples IPYNB this quantity seems to be above 5000, whilst I suspect for [1] it might be lower. Even then I suspect you resample from the toy distributions (Gaussians) every time as indicated in your pseudocode in which case the dataset sizes are simply not comparable at all and the non-iterative approaches like MLE-SB are prone to be affected significantly by generalisation error (which scales slowly in number of samples for kernel methods, which also don't overcome the curse of dim as well as neural networks) in particular in high dimensions.
2. Are the same number of timesteps used across the DL and the GP approaches when training? Computationally, it feels unlikely the Gram matrix would fit in memory for the number of timesteps used with the DL approaches.
3. A suggestion here would be to compare all approaches in an additional table (e.g. in the appendix) under budgets in the number of steps and number of samples (and fairly ensure these are the same across methods), this can be useful to quantify the data efficiency of each approach. Furthermore, the settings under which [1] was run (steps and dataset size) should be reported.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Finding saddle points / solving min-max problems is typically quite a challenging task and was often a huge challenge for the training stability of GANs (very high-variance training results). In contrast to IPF, which is much nicer from an objective viewpoint (a sequence of regression losses), this approach poses the question of how stable training is.
As none of the experiments have error bars (on training runs), it is difficult to see if the proposed approach is robust and “easy” to train. I would suggest the authors provide such results and, in addition, perhaps comparisons of the training loss with Chen 2021 (joint) or De Bortoli 2021.
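As a toy illustration of this concern (purely illustrative Python, unrelated to the paper's actual objective), plain simultaneous gradient descent-ascent already diverges on the simplest bilinear saddle problem:

```python
import numpy as np

# Simultaneous gradient descent-ascent on the bilinear saddle problem
# min_x max_y x*y (saddle point at the origin): each step multiplies the
# distance to the saddle by sqrt(1 + lr^2), so the iterates spiral outward.
x, y, lr = 1.0, 1.0, 0.1
radii = []
for _ in range(200):
    gx, gy = y, x                      # d/dx (x*y) = y, d/dy (x*y) = x
    x, y = x - lr * gx, y + lr * gy    # descend in x, ascend in y
    radii.append(float(np.hypot(x, y)))
print(radii[-1] > radii[0])            # True: plain GDA diverges here
```

Practical min-max solvers avoid this with inner loops, optimism, or extragradient steps, which is exactly why the training-stability question is worth reporting with error bars.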
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer WYHi, Thank you for your comments. Here are the answers to your questions.
**(1) Comparison with MLE-SB.**
Comparing entropic OT methods is difficult because they are based on different principles: IPF-based (MLE-SB, DiffSB, FB-SDE), dual-form based (LSOT, SCONES), and semi-dual-form based (ENOT, ours), and each of them has different hyperparameters that need to be tuned. **Our paper appears to be unique in the field in performing such a comprehensive comparison.** We have taken hyperparameters as close as possible to those from the authors' repositories. In our initial results, all the methods (except LSOT and MLE-SB) worked well and gave good results. LSOT performs poorly because it only learns the barycentric projection, not the EOT plan. At the same time, it was not clear to us why the results for MLE-SB were bad.
It appears that [1] is the paper in which MLE-SB was first introduced, and it is the only one that used GPs for training. Indeed, to ensure a fair comparison of the methods themselves, rather than of the way the regression problems are solved, it would be better to use a neural network for all methods. This is because GPs can be challenging to scale in setups with many time steps ($200$ for the high-dimensional Gaussians) and a moderate number of samples.
We reran the experiments for MLE-SB using a neural network parametrization of the drift function (as for the other methods) and using $200$ time steps, as we did for all SB methods. We use $1000$ IPF iterations and draw $512$ samples from each distribution $\mathbb{P}_0$ and $\mathbb{P}_1$ at each IPF iteration. **The updated results are shown in Tables 1, 2, and 3 of the attached PDF file.**
As we can see, MLE-SB performs similarly to or better than the other IPF-based method (DiffSB) and can also solve the problem in this setup. Now all the considered methods that learn the EOT plan work well on the Gaussian setup, and the goal of this experiment is achieved. The small residual error of all methods seems to be related more to hyperparameter tuning than to the nature of the algorithm used.
**(2) Stability of the proposed method.**
We ran our method on the Gaussian setup five times and provide the means and standard deviations in **Tables 1, 2, and 3 of the attached PDF file**. In Figure 1, we also provide the plot of $\text{BW}_2^2\text{-UVP}$ (\%) between the ground-truth EOT plan $\pi^{*}$ and the learned plan $\pi$ for ENOT and MLE-SB during training for $DIM=128$. Note that the steps on the plots represent different quantities due to the different nature of the algorithms: IPF steps for MLE-SB and outer-problem steps for ENOT (ours). As we can see, both methods converge stably.
**Concluding remarks**.
We would be grateful if you could let us know if the explanations we gave have been satisfactory in addressing your concerns about our work. If so, we kindly ask that you consider increasing your rating. We are also open to discussing any other questions you may have.
**Additional references.**
[1] Vargas, Francisco, et al. "Solving schrödinger bridges via maximum likelihood." Entropy 23.9 (2021): 1134.
---
Rebuttal Comment 1.1:
Title: Very satisfied with rebuttal and the uploaded comparisons.
Comment: Dear Authors,
I have read the rebuttal and can confirm that you have clarified and resolved all the issues I raised. In particular, the stability plots and the further fair comparisons brought everything to a very clean conclusion. Additionally, I have also gone over the other reviews and the provided responses; overall it looks pretty satisfactory. The non-trivial prior results were quite a nice addition and speak to the robustness/flexibility of the proposed approach.
I will increase my score accordingly once the platform allows it; for the time being it seems OpenReview is not allowing this until the discussion period is over. I do agree this work seems to be the most thorough comparison yet for SBP solvers within the eOT context, making this paper a substantial contribution beyond the novel approach that is proposed (which is also a solid and self-contained contribution on its own).
As far as I can tell, the particularity of the approach is to start from a known key constrained optimization problem in eq. (11) that has been presented before and that is progressively explained. Then, departing from previous methods, the authors propose a Lagrangian approach, for which we now have an unconstrained problem and the need to train not only the drift function that allows sampling, but also an additional potential (coined the "beta network"), in a min-max optimization approach that is similar to, but different from, the usual GAN methodology.
After the rigorous theoretical treatment, the authors present some very nice experiments that seem to support their claims and the interesting features of their approach. I think the paper is quite challenging, but it is also very stimulating and inspiring.
--
after rebuttal, I am still happy with the paper. Maintaining my score.
Strengths: I feel slightly uncomfortable assessing whether the proposed method is exactly new, or whether the authors miss some particular recent and/or relevant method in their references, mostly because I cannot be considered an expert on the topic. Still, it looks to me that they are doing their best to provide a very objective account of the relevant literature and to give many pointers, so I am assuming they can be trusted when they present their contributions.
The proposed method seems quite ok to implement (cf Algorithm 1) and the results are very nice. I am not really aware of the appropriate metrics and usual evaluation criteria that should be used, but I felt compelled by the experiments, which made me want to try things out. I guess this is the most important aspect.
To summarize what I feel are the strengths of the paper:
- good theoretical overview, motivations and derivations
- the resulting method seems simple to implement
- interesting performances
Weaknesses: The paper is a bit difficult to follow, but I don't think it should really be modified to be simpler. I guess it's mostly a matter of myself not being an expert in the topic.
Still, some changes or clarifications here and there could help. I will mention them below.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - p3 "Hence, one may optimize (5) over processes T for which T|x,y=W_xy for every x, y and set the last term in (6) to zero". How do you actually do that? By enforcing Gaussian dynamics?
- the paragraph "continuous OT" reads a bit weak to me. All the references are just given in a row, without great care for actually discussing them and their connections with the proposed approach.
In algorithm 1, I have some questions.
- do you actually need to store the gradients when computing {X_n, f_n} for the computation of L_\beta? It looks to me that Eul-Mar(X_0, T_{f_\theta}) can actually run in some `no_grad` environment, or am I mistaken? This would mean there is some possibility of strongly parallelizing this, or even of using external workers to compute it?
- On the contrary, in the inner loop (over k), you really need to store the gradients, right? Is it necessary to use many inner iterations K_f? Maybe I'm mistaken, but I don't see that discussed, although this looks like a key computational burden when I have a look at Algorithm 1, right? Is it feasible to just record gradients for the last k steps? For some of them? Something that could help going faster?
- I have no clue what this BW2-UVP metric is. Could you at least give us a hint that would help us avoid checking reference [25]?
- You must check the references. Most of them are badly formatted. You have "schr\"odinger" everywhere and you are missing many uppercases for proper nouns.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: I would say that the limitations are clearly mentioned
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer LQN6, thank you for your comments. Here are the answers to your questions.
**(1) p3 "Hence, one may optimize (5) over processes $T$ for which $T_{|x,y}=W_{|xy}$ for every $x, y$ and set the last term in (6) to zero". How do you actually do that ? By enforcing gaussian dynamics ?**
We did not intend to do that in practice. And we do not do it in our algorithm. We just noticed that it can be considered as such a constrained optimisation. In fact, there seems to be no straightforward way to parameterise such a family of processes. However, as we noted in the main text (lines 76-78), if $T^*$ is a solution to SB with prior $W^{\epsilon}$ for any two distributions $\mathbb{P}\_0$ and $\mathbb{P}\_1$, then $T^*\_{|x,y} = W^{\epsilon}\_{|x,y}$ [28, Proposition 2.3].
**(2) the paragraph "continuous OT" reads a bit weak to me. All the references are just given in a row, without great care for actually discussing them and their connections with the proposed approach.**
Since many recent papers have appeared on continuous OT, we focused the discussion only on the most relevant EOT and SB papers and only briefly mentioned the rest. Following your suggestion, we will extend the discussion of methods for solving other types of continuous OT (unregularized OT).
**(3) do you actually need to store the gradients when computing ${X_n, f_n}$ for the computation of $L_\beta$ ? It looks to me that Eul-Mar($X_0$, $T_{f_\theta}$) can actually run in some no grad environment, or am I mistaken ? This would mean there is some possibility in strongly parallelizing this or even use some external workers to compute that ?**
We do not have to store the gradients when computing $\{X_n, f_n\}$ for $L_\beta$ in Algorithm 1, and we indeed do not store them in our implementation. However, we have not yet tried to use computational schemes with external workers in our work. We appreciate the idea and will try it.
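For concreteness, here is a minimal NumPy sketch of such a gradient-free Euler-Maruyama rollout (the drift below is a toy stand-in for the learned $f_\theta$, and all names are illustrative rather than our actual implementation):

```python
import numpy as np

def euler_maruyama(x0, drift, n_steps=100, eps=0.1, rng=None):
    """Simulate dX_t = f(X_t, t) dt + sqrt(eps) dW_t on t in [0, 1],
    returning the trajectory and the drift values along it."""
    rng = rng or np.random.default_rng(0)
    dt = 1.0 / n_steps
    xs, fs = [x0], []
    x = x0
    for n in range(n_steps):
        f = drift(x, n * dt)
        x = x + f * dt + np.sqrt(eps * dt) * rng.standard_normal(x.shape)
        xs.append(x)
        fs.append(f)
    return np.stack(xs), np.stack(fs)

# Toy drift pulling every point toward the origin (illustrative only).
traj, drifts = euler_maruyama(np.ones((8, 2)), lambda x, t: -x, n_steps=200, eps=0.01)
print(traj.shape, drifts.shape)  # (201, 8, 2) (200, 8, 2)
```

In a deep-learning implementation, this sampling loop is exactly the part that can run without tracking gradients, which is why it could in principle be offloaded to external workers.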
**(4) On the contrary, in the inner loop (over k), you really need to store the gradients, right ? Is it necessary to use many inner iterations $K_f$ ? Maybe I'm mistaken but I don't see that discussed, although this looks like a key computational burden when I have a look at algorithm 1, right ? Is it feasible to just record gradient for the last k steps ? for some of them ? Some thing that could help going faster ?**
We have to store them, but we can easily solve the memory problem using gradient checkpointing [1]. We have implemented it in our code. The hyperparameter $K_f$ affects the quality of the solution of the inner problem and can be tuned. We noticed that $K_f=1$ is too small, and the algorithm can easily diverge, but for $K_f=10$, the problem vanishes. We noted in the limitations section that backpropagation through the SDE may be computationally heavy. More efficient SDE solvers can be used to overcome this limitation.
Another possibility is to consider approaches from stochastic optimal control or reinforcement learning, since with the fixed potential $\beta$, the inner problem can be formulated as a problem from these fields, as noted by **reviewer KyaP**. For example, one could consider $-||f(X_t, t)||^2\Delta t$ as a reward in an intermediate step and $\beta(X_{predicted})$ as an additional reward in the final step. In this case, one could use Q-learning or Actor-Critic approaches to learn from a segment of a trajectory without propagating through the whole trajectory.
**(5) I have no clue what this BW2-UVP metric is. Could you at least give us a hint that would help us avoid checking reference [25] ?**
It is the Wasserstein-2 distance between distributions $\mathbb{P}$ and $\mathbb{Q}$ that are coarsened to Gaussians and normalized by the variance of the distribution $\mathbb{Q}$:
$$
\text{BW}\_{2}^{2}\text{-UVP}\big(\mathbb{P}, \mathbb{Q}\big) = \frac{100\%}{\frac{1}{2}\text{Var}(\mathbb{Q})} \mathbb{W}\_{2}^{2} \big(\mathcal{N}(\mu\_{\mathbb{P}}, \Sigma\_{\mathbb{P}}), \mathcal{N}(\mu\_{\mathbb{Q}}, \Sigma\_{\mathbb{Q}})\big).
$$
We will add this definition to the main text.
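For concreteness, a NumPy sketch of this metric computed from samples is given below (function names are illustrative, not from any released code; it uses the closed-form Bures-Wasserstein distance between the fitted Gaussians):

```python
import numpy as np

def _sqrtm_psd(mat):
    # Square root of a symmetric PSD matrix via eigendecomposition.
    vals, vecs = np.linalg.eigh(mat)
    return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

def bw2_uvp(samples_p, samples_q):
    """BW_2^2-UVP(P, Q) in percent: both sample clouds are coarsened to
    Gaussians, compared via the closed-form Wasserstein-2 distance
    between Gaussians, and normalized by half the total variance of Q."""
    mu_p, mu_q = samples_p.mean(axis=0), samples_q.mean(axis=0)
    cov_p = np.cov(samples_p, rowvar=False)
    cov_q = np.cov(samples_q, rowvar=False)
    sq = _sqrtm_psd(cov_q)
    mid = sq @ cov_p @ sq
    cross = _sqrtm_psd((mid + mid.T) / 2)  # symmetrize for numerical safety
    w2_sq = np.sum((mu_p - mu_q) ** 2) + np.trace(cov_p + cov_q - 2 * cross)
    return 100.0 * w2_sq / (0.5 * np.trace(cov_q))

rng = np.random.default_rng(0)
x = rng.normal(size=(4000, 3))
print(abs(bw2_uvp(x, x)) < 1e-6)  # identical clouds give (numerically) zero
```

Shifting one cloud away from the other makes the metric grow, as expected from the squared-mean-difference term.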
**(6) You must check the references. Most of them are badly formated. You have "schr"odinger" everywhere and you are missing many uppercases for proper nouns.**
Thanks so much for pointing this out. We will check and correct the references.
**Concluding remarks.**
We would be grateful if you could let us know if the explanations we gave have been satisfactory in addressing your concerns about our work. We are also open to discussing any other questions you may have.
**Additional references.**
[1] Chen, Tianqi, et al. "Training deep nets with sublinear memory cost." arXiv preprint arXiv:1604.06174 (2016).
[2] Christian Léonard. A survey of the Schrödinger problem and some of its connections with optimal transport.
Strengths: The paper is extremely well-written and reader-friendly.
- Several remarks are made to facilitate reading and to provide intuition about technical notions.
- A guarantee on the quality of the saddle point solution is provided.
- Addressing the small $\epsilon$ case, which is a source of instability of several other methods.
Weaknesses: * In line 223, it is mentioned that the negative entropy is not strongly convex. This is false, as the function $x\mapsto x\ln(x)$ has second derivative $x \mapsto \frac{1}{x}$, which is bounded from below by $1$ on the interval $[0,1]$. See for example Section 4.1 of reference [2]. As a result, the comparison to [1] (reference [5] in the paper) needs to be reconsidered.
* There is no concluding section.
* Some experimental section metrics are not introduced.
* The $\rm{BW_2^2-UVP}$ is not introduced even in the appendix, although a reference for it is given.
* FID: the previous remark applies.
* The parametrization $g(X_t,t) = X_t + f(X_t,t)\Delta_t$ should have been indicated in the main paper rather than in the appendix: although it is mathematically equivalent to the parametrization presented in the main paper, it allowed better results on the CelebA dataset according to the appendix.
## Minor remarks
* Problem with links: For some reason, the bibliographic references links along with links to equations and sections etc. are not working.
* $\pi^{W^\epsilon}$ is introduced for the first time in Equation (8) without being defined.
* $W_{|x,y}$ is not explicitly defined, although one can infer its meaning from the definition of $T_{|x,y}$.
* Theorem 4.1: I think it should be "every pair $(\beta^*,T_{f^*})$ for (13)" rather than "for (12)" as problem (13) is a saddle point problem.
* Reference to Algorithm 2: in line 195, Algorithm 2 is referenced. However, it is not indicated that it is written in the appendix.
* Suggestion: index $m$ can be removed in the "$\widehat{KL} \leftarrow$" line of Algorithm 1 since the sum terms are already indicated to be the values of $f_n$, or it is possible to indicate $\sum_{m=1}^{|f_n|}$.
* Line 158: I think it should be added "that is bounded from above" to "a continuous function".
## References
[1] Asadulaev, A., Korotin, A., Egiazarian, V., & Burnaev, E. (2022). Neural optimal transport with general cost functionals. *arXiv preprint arXiv:2205.15403*.
[2] Peyré, G., & Cuturi, M. (2019). Computational optimal transport: With applications to data science. *Foundations and Trends® in Machine Learning*, *11*(5-6), 355-607.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: * Computation of $\mathbb{E}\left[\int_0^1\Vert f(X_t,t)\Vert^2{\rm d}t\right]$ : in line 197, it is indicated that the mean of the $f(x,t)$ is used. Is this justified by the Riemann integral discrete approximation? If so, does a trapezoidal rule for approximating the integral improve the result?
* Is it straightforward to generalize the approach to costs other than the squared Euclidean distance? Does it fundamentally change the nature of the associated stochastic process?
* Did the authors try to apply the method to the domain adaptation (DA) problem, as several DA methods rely on optimal transport?
----------
I have read the authors' rebuttal. They have addressed my concerns.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The impact of the paper and the limitations of the contribution are clearly discussed in Section 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer TSgQ, thank you for your comments. Here are the answers to your questions.
**(1) In line 223, it is mentioned that the negative entropy is not strongly convex. This is false, as the function $x\mapsto x\ln(x)$ has second derivative $x \mapsto \frac{1}{x}$, which is bounded from below by $1$ on the interval $[0,1]$. See for example Section 4.1 of reference [2]. As a result, the comparison to [1] (reference [5] in the paper) needs to be reconsidered.**
Recall that the negative (differential) entropy of a distribution with density $p(x)$ is given by $-H(p) = \int p(x)\log p(x)dx$. The negative entropy is indeed $\frac{1}{M}$-strongly convex if we only consider distributions $p$ whose densities are bounded by a constant $M>0$ (this follows from your argument). However, we work with general continuous distributions ($\mathbb{P}_0,\mathbb{P}_1,\widehat{\pi}$, etc.) which may not satisfy this assumption. For example, $\mathcal{N}(x|0,\sigma)$ for small $\sigma$ may have density at $0$ greater than any given constant $M$. Therefore, the differential entropy is not $\frac{1}{M}$-strongly convex for any $M>0$.
**(2) Some experimental section metrics are not introduced.**
*We will add the explanations and definitions of the metrics to Appendix.*
**(3) The parametrization $g(X_t,t) = X_t + f(X_t,t)\Delta_t$ should have been indicated in the main paper rather than in the appendix: although it is mathematically equivalent to the parametrization presented in the main paper, it allowed better results on the CelebA dataset according to the appendix.**
*We will include a comment regarding the utilized parametrization in the main text (in Section 4.2).*
**(4) Problem with links: For some reason, the bibliographic references links along with links to equations and sections etc. are not working.**
Thank you for pointing this out. We will fix it.
**(5) Minor remarks.**
Thank you for your comments. We will fix all the issues and think this will further improve the clarity.
**(6) Computation of $\mathbb{E}\left[\int_0^1\Vert f(X_t,t)\Vert^2{\rm d}t\right]$: in line 197, it is indicated that the mean of the $f(x,t)$ is used. Is this justified by the Riemann integral discrete approximation? If so, does a trapezoidal rule for approximating the integral improve the result?**
Yes, we used the mean as a discrete approximation of $\mathbb{E}\left[\int_0^1\Vert f(X_t,t)\Vert^2{\rm d}t\right]$. We have not tried to use other types of approximation of this integral that might improve the proposed algorithm. Thank you for your suggestion.
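To illustrate the difference between the two quadrature rules on a toy integrand (with $t^2$ standing in for $\Vert f(X_t,t)\Vert^2$, so the true integral $\int_0^1 t^2\,dt = 1/3$ is known; this is purely illustrative and not an experiment from the paper):

```python
import numpy as np

n = 50
t = np.linspace(0.0, 1.0, n + 1)
values = t ** 2                                      # toy stand-in for ||f(X_t, t)||^2

riemann = values[:-1].mean()                         # left-endpoint mean (Riemann sum)
trapezoid = ((values[:-1] + values[1:]) / 2).mean()  # trapezoidal rule

print(abs(riemann - 1/3), abs(trapezoid - 1/3))      # trapezoid is markedly closer
```

For smooth integrands the trapezoidal error shrinks as $O(1/n^2)$ versus $O(1/n)$ for the left-endpoint rule, which suggests the reviewer's proposal could indeed reduce discretization error at no extra cost.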
**(7) Is it straightforward to generalize the approach to costs other than the squared Euclidean distance? Does it fundamentally change the nature of the associated stochastic process.**
Yes, it is quite straightforward. We focus only on EOT with the quadratic cost $c(x,y)=\frac{1}{2}\|x-y\|^{2}$ which coincides with SB with the Wiener prior $W^{\epsilon}$. However, one could use a different prior $Q_v$, given by the SDE:
$$
Q_v: dX_t = v(X_t, t)dt + \sqrt{\epsilon}dW_t,
$$
and solve the problem
$$
\inf_{T_f \in \mathcal{D}(\mathbb{P}_0, \mathbb{P}_1)} \text{KL}(T_f || Q_v) = \inf\_{T\_f \in \mathcal{D}(\mathbb{P}\_0, \mathbb{P}\_1)} \frac{1}{2\epsilon} \mathbb{E}\_{T\_f}[\int\_{0}^1 ||f(X_t, t) - v(X_t, t)||^2 dt].
$$
Here we just use the known expression to $\text{KL}(T_f|| Q_v)$ between two diffusion processes through their drift functions as we did in our paper. Using the same derivation as in our paper, it can be shown that this new problem is equivalent to solving the EOT with cost $c(x,y) = -\log \pi^{Q_v}(y|x)$, where $\pi^{Q_v}(y|x)$ is a conditional distribution of the stochastic process $Q_{v}$ at time $t=1$ given the starting point $x$ at time $t=0$. For example, for $W^{\epsilon}$ (which we used) we have $c(x,y) = -\log \pi^{W^{\epsilon}}(y|x) = \frac{1}{2 \epsilon}(y-x)^T(y-x) + \text{Const}$, i.e., we get the quadratic cost. Thus, using different priors for the Schrodinger bridge problem makes it possible to solve Entropic OT for other costs.
It seems that all our proofs can be extended to any prior process $Q_v$ just by slightly changing the minimax functional:
$$
\sup\_{\beta} \inf\_{T\_{f}} (\frac{1}{2\epsilon} \mathbb{E}\_{T\_f}[\int\_{0}^1 ||f(X\_t, t) - v(X\_t, t)||^2 dt] + \int\_{\mathcal{Y}} \beta\_{\phi}(y) d\mathbb{P}\_1(y) - \int\_{\mathcal{Y}} \beta\_{\phi}(y) d\pi\_1^{T\_f}(y)).
$$
We conducted a toy experiment to support this claim and consider $Q_v$ with $\epsilon=0.01$ and $v(x, t) = \nabla \log p(x)$, where $p(x)$ is a 2D distribution with a wave shape, **see Figure 2 of the attached PDF.** Intuitively, this means that trajectories will be concentrated in the regions with a high density of $p$. In Figure 2, the grey-scale color map represents the density of $p$, start points ($\mathbb{P}\_0$) are green, target points ($\mathbb{P}\_1$) are red, obtained trajectories are pink, and mapped points are blue.
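The quadratic-cost identity for the Wiener prior can also be checked numerically; the sketch below (illustrative only) compares differences of $-\log\pi^{W^{\epsilon}}(y|x)$, in which the additive constant cancels:

```python
import numpy as np

# For the Wiener prior, pi^{W^eps}(y|x) = N(y | x, eps*I), so
# c(x, y) = -log pi(y|x) = ||y - x||^2 / (2*eps) + const; the additive
# constant cancels when comparing the costs of two candidate endpoints.
eps, d = 0.5, 3
rng = np.random.default_rng(1)
x, y1, y2 = rng.normal(size=(3, d))

def neg_log_gauss(y, x):
    # Negative log-density of N(y | x, eps*I) in d dimensions.
    return 0.5 * np.sum((y - x) ** 2) / eps + 0.5 * d * np.log(2 * np.pi * eps)

lhs = neg_log_gauss(y1, x) - neg_log_gauss(y2, x)
rhs = (np.sum((y1 - x) ** 2) - np.sum((y2 - x) ** 2)) / (2 * eps)
print(np.isclose(lhs, rhs))  # True
```

Swapping in a non-Wiener prior $Q_v$ replaces this Gaussian conditional with the conditional law of $Q_v$ at $t=1$, which is exactly how the cost $c(x,y) = -\log \pi^{Q_v}(y|x)$ generalizes.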
**(8) Did the authors try to apply the method to the domain adaptation (DA) problem as several DA methods rely on optimal transport?**
No, we did not try.
**Concluding remarks.**
We would be grateful if you could let us know if the explanations we gave have been satisfactory in addressing your concerns about our work. If so, we kindly ask that you consider increasing your rating. We are also open to discussing any other questions you may have.
**Additional references.**
[1] Asadulaev, A., Korotin, A., Egiazarian, V., \& Burnaev, E. (2022). Neural optimal transport with general cost functionals. arXiv preprint arXiv:2205.15403.
[2] Michele Pavon and Anton Wakolbinger. On free energy, stochastic control, and Schrödinger processes. In Modeling, Estimation and Control of Systems with Uncertainty: Proceedings of a Conference held in Sopron, Hungary, September 1990, pages 334–348. Springer, 1991.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: Dear Authors,
I would like to thank you for the detailed and very well-written rebuttal. It addressed all of my concerns. I have updated my score.
Rebuttal: Dear reviewers, thank you for taking the time to review our paper.
Your valuable feedback and constructive comments are greatly appreciated. We are particularly pleased that all reviewers found our paper well-written and easy to read (TSgQ, LQN6, WYHi, KyaP). We are also pleased that you find our duality gap analysis and guarantee of the quality of the saddle point solution important (TSgQ, WYHi), that our experiments are extensive and include many related baselines (KyaP), and that the whole work is great and quite complete (WYHi).
Please, find the answers to your questions below. **Please note that we have added tables and figures in the attached pdf to support our responses to the reviewers WYHi, KyaP, and TSgQ.**
Pdf: /pdf/756ed1a14ee304765e8cd12278840eb8216bbace.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Calibrated Stackelberg Games: Learning Optimal Commitments Against Calibrated Agents | Accept (spotlight) | Summary: This manuscript introduces the concept of calibrated forecasts in repeated Stackelberg games (SGs) and proposes two concepts: calibrated Stackelberg games (CSGs), which generalize standard SGs, and adaptive calibrated forecasts. The technical contribution is as follows: First, a learning algorithm for the principal against adaptively calibrated agents is proposed, where the average utility of the principal converges to the Stackelberg value, which is the best possible average utility under this setting. Second, a forecasting algorithm that meets the notion of calibrated forecasts is proposed, meaning that the agents can actually compute calibrated forecasts of the principal's actions. Third, even for continuous Stackelberg games, where the action sets of the principal and the agent are continuous, there is a learning algorithm for the principal against certain adaptively calibrated agents under which the average utility of the principal converges to the Stackelberg value, the best possible one.
Strengths: The notion of calibrated forecasts, on which the paper is entirely based, seems quite reasonable; compared to the original definition, it is generalized by introducing a binning function, enabling us to represent some rules for choosing the best response of agents (the deterministic/randomized tie-breaking rules in the manuscript). Although the assumption seems rather strong and perhaps unrealistic, this work succeeds in proposing forecasting algorithms that meet the conditions of adaptive calibration by combining online algorithms with the study of novel game dynamics; this technical contribution seems far from trivial. For the principal, this work succeeds in proposing learning algorithms that asymptotically achieve the best possible average utility, meaning that the proposed learning algorithm is asymptotically the best one under this setting.
Weaknesses: The connection between the proposed concepts (calibrated Stackelberg games and adaptive calibrated forecasts) and the applications the manuscript claims (Stackelberg security games and strategic classification) is unclear from the manuscript, although it is claimed that the results "immediately apply". Thus, it looks like this work addresses an artificial setting in Stackelberg games. To avoid this, the authors should carefully review the connection between the proposed concepts and the applications, at least in the Appendix.
Minor comment:
In Theorem 5.2, unlike Theorem 3.1, the binning \Pi_0 is fixed as Eq. (25), but it is described only in the Appendix. As far as I read the main part, I'm afraid that the binning is irresponsible and not related to the agent's policy. For the sake of completeness, please consider describing the actual formula for the binning and its meaning in the main article.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: P.3, l.128: "Definition 2.2 is weaker than the standard definition of calibration..." What does "weaker" mean? As far as I understand, introducing a binning function \Pi is a generalization compared to the standard definition but it does not strengthen or weaken the assumption.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors describe some limitations (and thus future directions) of this work in the conclusion section and in Appendix F.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's positive feedback on our paper. We’re happy to see that the reviewer recognizes the importance of addressing Stackelberg games through calibration, moving beyond traditional assumptions. We address the specific questions in the subsequent paragraphs:
**Stackelberg security games and strategic classification as applications of CSG**
Thank you for the suggestion. We'll add the following paragraph to the revised version of our paper to explain how security games and strategic classification falls under the framework of CSGs:
- **Stackelberg security games (SSG)** are a prominent example of finite Stackelberg games that captures the strategic interactions between an attacker (or agent) and a defender (or principal). In an SSG, the defender commits to a probabilistic distribution of security resources across $k$ targets, and the attacker best responds to it by attacking the target that maximizes its utility. Specifically, the defender's action space $\mathcal{A}_P$ includes the finite set of schedules, where each schedule is the subset of targets that can be simultaneously defended by one security resource. The attacker's action space $\mathcal{A}_A$ is the finite set of targets $[k]$ that the agent can attack. Transitioning this into our Calibrated Stackelberg Game (CSG) model, we depart from the traditional assumption that the attacker has perfect knowledge of the defender's strategy. Instead, we consider realistic attackers that base their decisions on calibrated forecasts of defender's strategies.
- **Strategic classification** is an application of continuous SGs where the principal aims to learn classifiers that remain robust when agents strategically manipulate their features to receive positive classifications. We use the model of [Dong et al, EC 2018] to illustrate how it falls under the framework of continuous SGs. In this model, during every round of interaction, the principal commits to a linear classifier $\mathbf{h}_t\in\mathbb{R}^d$, then the agent with initial feature $\theta_t\in\mathbb{R}^d$ best responds to the classifier $\mathbf{h}_t$ by modifying its feature to $\tilde{\theta}_t=\theta_t+w_t\in\mathbb{R}^d$ that maximizes the dot product of $\langle \tilde{\theta}_t,\mathbf{h}_t\rangle$ minus the cost of movement $c_t(w_t)$. The principal's utility is the negative logistic loss or hinge loss. In the above model, the agent has full knowledge of the classifier $\mathbf{h}_t$ before manipulating its features, which might be unrealistic for applications like college admissions where the classification rules are opaque to the agent. Our CSG framework relaxes this assumption by allowing agents to best respond to beliefs about the classifier $\mathbf{h}_t$ generated by any calibrated forecasting algorithm. Our results applied to this setting show that when agents’ features are drawn from a stochastic distribution, the principal's optimal average utility is captured by the same Stackelberg value obtained when agents have direct knowledge of $\mathbf{h}_t$.
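As a small numerical illustration of the agent's best response in this strategic classification model (assuming, for concreteness, a quadratic movement cost $c_t(w)=\lambda\\|w\\|_2^2$; the cost form and function names here are our illustrative choices, not necessarily those of [Dong et al, EC 2018]):

```python
import numpy as np

def agent_best_response(theta, h, lam=1.0):
    # Agent maximizes <theta + w, h> - lam * ||w||^2 over the movement w.
    # First-order condition: h - 2 * lam * w = 0  =>  w* = h / (2 * lam)
    w_star = h / (2.0 * lam)
    return theta + w_star
```

The manipulated feature moves in the direction of the classifier $\mathbf{h}_t$, with a step size controlled by the cost parameter; in a CSG, $\mathbf{h}_t$ would be replaced by a calibrated forecast of the classifier.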
**Binning function in the continuous setting**
While the continuous setting’s binning may appear less intuitive, it is still related to the agent’s policy. In the discrete setting, the grouping of the principal's strategies into bins is based on whether they induce the same best response from the agent. When extending this to a continuous setting, the concept remains grounded in the agent's best response, but with some adjustments to accommodate the continuous action space. Specifically, we define the binning function based on whether the induced best responses are in close proximity rather than being identical. This is achieved by first creating an $\varepsilon$-net of the strategy space, followed by smoothing the discontinuous net into continuous $\Lambda$ functions that serve as the bins after normalization. Intuitively, the $\Lambda$ functions, defined as $\Lambda(p)=(R-\\|p-x\\|_2)\_+$ for all $x$ in the net, resemble the shape of "tents". These functions peak at $x$ and smoothly decrease to 0 as $p$ moves away from $x$. Consequently, strategies sharing a nonzero $\Lambda$ function (or equivalently nonzero binning function) must be close in $l_2$ distance. This, in turn, implies that their best responses must also be close, given the assumption that the BR function is Lipschitz. In this manner, the cover still respects the distance metric regarding the agent's response, ensuring that the binning is not irresponsible or unrelated to the agent's policy, but rather a nuanced adaptation to the continuous setting. We will further clarify this in the final version of the paper.
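To make the construction concrete, here is a minimal numerical sketch of the tent-shaped $\Lambda$ functions and the normalized binning they induce (the net, radius $R$, and function names are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def tent(p, x, R):
    # Lambda_x(p) = (R - ||p - x||_2)_+ : peaks at x, decreases to 0 at distance R
    return max(R - np.linalg.norm(p - x), 0.0)

def binning(p, net, R):
    # Normalize the tent values so the bins sum to one wherever
    # at least one tent is active.
    vals = np.array([tent(p, x, R) for x in net])
    total = vals.sum()
    return vals / total if total > 0 else vals
```

Two strategies that share a nonzero coordinate of `binning(p, net, R)` are each within $R$ of the same net point, hence within $2R$ of each other in $\ell_2$ distance, which is the proximity property this paragraph appeals to.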
**Binning as a weakening of the standard calibration assumptions**
While the introduction of binning is indeed a generalization of the standard calibration definition, it's a definition that can only weaken the assumption, because binnings serve as a coarsening of the representation of functions on which the agent wants to achieve vanishing calibration error.
In fact, the finest-grain bin, which corresponds to the indicator function $w_{\mathbf{p}}(x)=\mathbf{1}\\{x=\mathbf{p}\\}$ for every possible strategy $\mathbf{p}$, would lead to the standard definition of calibration as it inspects the calibration error for every strategy independently; and calibration wrt this binning immediately implies calibration wrt all other normalized binnings. Given our approach of defining the binning based on the agent's best response function, our generalized calibration assumption is weaker than the finest-grained binning and forms a realistic assumption that the agents can easily achieve.
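As an illustrative sketch (the function names and the exact error norm are our assumptions, not the paper's definitions), calibration error with respect to a binning/weight function can be computed as:

```python
import numpy as np

def binned_calibration_error(forecasts, outcomes, w):
    # forecasts: length-T list of probability vectors p_t
    # outcomes:  length-T list of one-hot outcome vectors
    # w:         binning/weight function mapping a forecast to a weight in [0, 1]
    T = len(forecasts)
    err = sum(w(p) * (p - y) for p, y in zip(forecasts, outcomes))
    return np.linalg.norm(err) / T
```

Choosing the indicator weight $w_{\mathbf{p}_0}(p)=\mathbf{1}\\{p=\mathbf{p}_0\\}$ for every strategy $\mathbf{p}_0$ recovers per-strategy (standard) calibration; coarser binnings can only make the requirement easier to satisfy.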
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the detailed reply. I now adequately understand the connection between the proposed concepts and the applications (SSG and strategic classification). I think the authors' claim that the proposed concepts fit the described applications is appropriate. In addition, the question I raised is adequately resolved. Thus, I'm still in favor of accepting this manuscript.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to read our rebuttal. We are glad that our response addressed your questions. We'll use the additional page to review the connections between our proposed CSG framework and relevant applications. | Summary: In this work, the authors consider the problem of Calibrated Stackelberg Games (CSGs), a generalization of Stackelberg Games. This framework differs from standard online learning problems in that, instead of a single learner entity, there are two: a principal and an agent. The key difficulty introduced by the CSG framework is that the agent needs to respond to the action of the principal without being able to observe it. Instead, the agent has to forecast what it expects the principal's action to be and aims to best respond to that believed action of the principal.
As this learning objective is more challenging than classic learning problems, the authors consider the question of whether the principal can achieve the optimal utility $V^*$ (the Stackelberg value, i.e., the utility of the principal's optimal action when the agent best responds to it) in this calibrated game.
An important part of the paper is dedicated to the construction of adaptively calibrated forecasts and of the CSG protocol. Then, the authors present and analyse an algorithm that can achieve optimal utility with high probability.
The principal's algorithm is a simple explore-then-commit strategy. The exploration starts with $\log T/\eta$ uniformly sampled strategies, for each of which the agent returns an associated response. Then the principal tries to find approximately optimal strategies for each of the agent's responses, and ends up picking the one that yields the highest utility.
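A toy sketch of this explore-then-commit logic (the callback names, the fixed sample budget, and the deterministic agent responses are simplifications for illustration; the paper's actual procedure is more involved and handles calibration noise):

```python
import numpy as np

def explore_then_commit(play, utility, candidate_strategies, n_explore, rng):
    # Explore: play uniformly sampled strategies and record which agent
    # response each of them induces.
    seen = {}
    for _ in range(n_explore):
        h = candidate_strategies[rng.integers(len(candidate_strategies))]
        y = play(h)  # agent's observed response to strategy h
        seen.setdefault(y, []).append(h)
    # For each observed response, keep the highest-utility strategy...
    best_per_response = {y: max(hs, key=lambda h: utility(h, y))
                         for y, hs in seen.items()}
    # ...and commit to the pair yielding the highest principal utility.
    y_star = max(best_per_response, key=lambda y: utility(best_per_response[y], y))
    return best_per_response[y_star]
```

In the commit phase the principal then plays the returned strategy for the remaining rounds.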
They then analyze the performance of the algorithm and show that it asymptotically reaches the optimal utility for discrete and continuous games.
Strengths: This paper studies a generalization of Stackelberg games, which are challenging in the online learning framework, and shows that it is possible for the principal to reach optimal utility even if the agent doesn't have access to the strategy picked by the principal ahead of time.
These results are novel, and the analysis of the algorithm builds upon standard online learning algorithms. The authors took particular care in connecting the CSG framework with other online learning problems such as sleeping experts.
This work provides some good preliminary baselines for the CSG problem, and should provide a strong foundation for future works to build upon.
Weaknesses: The main weakness of the paper is that the provided result, namely that the presented algorithm can find the optimal utility, only holds asymptotically, which makes it difficult to apply in practice, where we only have a limited time horizon.
It would be useful to discuss extensions of this work that could achieve stronger guarantees in finite time horizons.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Do you think that it is possible to extend your work beyond the asymptotic guarantees that you provide?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the appreciation of our work.
**On whether non-asymptotic guarantees are possible**
Yes, absolutely! While we state our results in asymptotic forms in the main body, we have already provided non-asymptotic guarantees for finite time horizon in the appendix. Specifically, please refer to Thm C.14 in Appendix C.6 for a more formal version of Thm 3.2. We opted to defer these detailed rates to the appendix due to their complexity and the potential difficulty in interpretation for general games, but we are happy to instantiate them for specific settings like security games and present them as corollaries in the main body.
---
Rebuttal Comment 1.1:
Comment: Thank you for your answer, I had indeed overlooked that part of the theorem. | Summary: The paper defines and studies a new Stackelberg games setup. Rather than making some standard assumptions --- e.g. that the principal and/or the agent exhibit specific types of play (e.g. agent playing no regret), or assuming access to the agent’s best response oracle, etc. --- this paper only assumes that the agent will be playing by best-responding to (appropriately) calibrated forecasts of the principal play. In this Calibrated Stackelberg Games setup, the authors show that: (1) The principal, by only knowing that the agent will be playing in a calibrated way, can achieve exactly (no more and no less than) the Stackelberg value of the game over a repeated interaction, by following an explore-then-commit style algorithm; (2) The agent has an efficient algorithm for producing said calibrated forecasts (in fact, strengthened by the notion of adaptivity --- meaning that calibration should hold over all subintervals of the time axis). These results are further complemented by e.g. considering both finite and continuous action spaces, as well as an adaptive calibration algorithm.
Strengths: As intended by the authors, they are able to demonstrate that Calibrated Stackelberg games indeed show potential towards relaxing/moving away from various restrictive assumptions (on what the principal and the agent observe and how they play) in the literature. An important moral takeaway is that the same old Stackelberg value --- which one might have expected might in fact require the principal and the agent to (more or less) explicitly observe each other’s play --- can be achieved over time by only asking the agent to use calibrated forecasts of the principal’s play (which is a fairly mild requirement), and the principal to know and use the fact that the agent exhibits calibration.
In fact, this conclusion is carefully shown to hold (existentially and algorithmically) both for finite and continuous action spaces --- which, while to be morally expected from standard minimax reasoning, still takes work to rigorously establish and shows thoroughness on the authors’ part.
In terms of the techniques employed, the exploration process for achieving this value on the principal’s side is, in fact, not very straightforward to design both for finite and continuous action spaces; so, there is a healthy dose of sophistication involved. The adaptive calibration algorithm, on the other hand, quite straightforwardly follows from existing techniques at the intersection of no-regret dynamics and online multiobjective settings.
Weaknesses: I did not spot any technical weaknesses, and the contribution of this paper to both the Stackelberg games and the calibration literatures is solid. So overall, the paper does not have any significant weaknesses. However,
I would like to point out that the principal needing, in certain parts of this new framework, to know the calibration rate of the agent is not to be taken lightly, given that an important motivation of this paper is to relax, as much as possible, the prevalent specificity, in the existing literature, of the requirements on what the principal and the agent should know. I would appreciate it if the authors could provide further elaboration on this, beyond the brief mention in the conclusion.
Secondly, while having *adaptive* calibration guarantees is nice, compared to regular marginal ones, I’m not clear on how or whether this adaptivity interacts with the proposed theory of CSG or is more or less orthogonal? In other words --- okay, using the standard sleeping experts technique for establishing adaptive online guarantees, it is possible to make the agent calibrated on every [s, t] rather than just on [1, T]; but does that really matter for e.g. being able to achieve the value V* in the process of play, or any other desirable Stackelberg properties? Since the main point of the paper is to propose a theory of calibrated Stackelberg games, it is important to be clear on whether this is an essential element of such a theory or was just added to the paper for good measure. If this is in fact an essential element, I’d like to ask the authors to clarify this.
I also did find the presentation suboptimal --- the paper reads quite densely; especially, in my experience, when it comes to the proof sketch after Theorem 3.2 in Section 3. I invite the authors to revamp that part of the presentation for the rebuttal phase. Some specific things that I’d appreciate would be (1) alleviating the notational/explanatory tedium related to condition (P1) --- I am still not clear on how strong or weak it is, and where exactly things must break if it wasn’t required; (2) adding a high-level description (preferably involving more prose) of how the agent’s calibration figures into what the algorithm for the principal does.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: For the substantive questions, please see my questions above on: (1) the principal needing to know the calibration rate; (2) the requirement of adaptivity of calibration; (3) some elements of Section 3 such as property (P1) etc.
Here, I’ll quickly list a sampling of a few typos and notational issues:
Line 300: bound*ed*
Line 285: Where is the notation L_g defined?
Line 276: Brackets around sigma in the subscript for g
Line 222: Probably meant to say that (h hat, y) is an equilibrium rather than just h hat
Line 201: Where is h bar_T defined?
Line 174: The fundamental constructs from Definition 2.2 are reviewed, not from Eq 1
Line 102: converged *to a* Stackelberg equilibrium
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive and insightful comments. Below we address the specific questions.
**The principal needs to know the agent’s calibration rate**
We will add a more detailed discussion to our paper.
On the one hand, the principal does *not* need to know the agent's exact calibration rate. Knowing an upper bound or approximate value suffices for learning a near-optimal commitment against calibrated agents. Given that the calibration error has to be small and vanishing for our setting, we think that assuming some knowledge of an appropriate upper bound on a small calibration error provides a good tradeoff between relaxing the assumptions found in previous literature and still obtaining provable guarantees in Calibrated Stackelberg games.
On the other hand, if the principal is completely unaware of the calibration rate or the calibration error is large, several complications arise. Since the agent's beliefs might come from any forecasting algorithm, their (average) responses to the principal's strategies may be suboptimal. This uncertainty complicates the exploration phase, where the principal must decide between improving feedback accuracy by repeating the current strategy or updating the strategy based on existing feedback. Therefore, we believe that some degree of knowledge regarding calibration rate is necessary, and proving this necessity is an intriguing question for future research. Moreover, if the calibration error is large (and thus harder to justify having an approximate upper bound), the agent’s behavioral model can fit any online algorithm. It is known that in such situations, the principal’s utility is not characterized by $V^*$. For example, during a commit phase, even playing the optimal strategy $h^*$ may lead to high suboptimality if the agent doesn't best respond due to high calibration error of the forecasts.
**The requirement of adaptivity**
We believe that lack of adaptive guarantees poses a technical challenge for the design of learning algorithms and likely impacts the convergence rates to $V^*$. At a high level, adaptivity ensures that each exploration step (referring to a sub-interval) is approximately correct, enabling the principal to more effectively search the space and find a final strategy that is both robust and near-optimal.
More specifically, recall that during the exploration phase of the Explore-Then-Commit algorithm, the principal first learns the optimal commitments within each robustified best response polytope, then selects the one with the highest utility. In this process, the key challenge is determining whether a strategy robustly lies within a specific best response polytope. We address this by sampling test strategies around the queried one, playing them repeatedly to find an approximate best response. This is enabled by adaptive calibration within time subintervals $[t_1,t_2]$ where each test strategy is played. Without the adaptivity property, however, ensuring an approximate best response would require a much larger $t_2$ relative to $t_1$ to make the interval $[0,t_1)$ negligible compared to $[0,t_2]$, where marginal calibration applies. This increases the repetitions for each test strategy. While this is not a proof of an algorithm-independent lower bound on the convergence rate, we wouldn’t be surprised if such a proof could be formalized that indicates that the lack of adaptivity property would result in the convergence rate becoming much worse, possibly exponentially. We believe this is an interesting question for future work.
**The presentation in Section 3**
We plan to enhance the clarity and structure of Section 3.1 as follows:
- **Elaborating on the objectives for Explore phase**: Before delving into the notations $B\_2(S,\varepsilon)$ and $B\_2(S,-\varepsilon)$, we'll overview what we aim to achieve in the Explore phase:
- **Idealized Setting**: Initially, we'll consider a setting with zero calibration error, where the agent's forecasting algorithm is perfectly and adaptively calibrated, leading to $y\_t=\text{BR}(h_t)$ at every round. Within the Explore phase, the task simplifies to identifying a near-optimal strategy through best response oracles, satisfying $U_P(\tilde{h},\text{BR}(\tilde{h}))\ge V^*-\varepsilon\_1$ for a predetermined $\varepsilon\_1$. In the Commit phase, given that the agent always plays $\tilde{y}=\text{BR}(\tilde{h})$, the Stackelberg regret can be upper bounded by $\varepsilon\_1|T\_2|$. Hence, the Explore-Then-Commit algorithm's regret is bounded by $V^*|T_1|+\varepsilon_1|T_2|$.
- **Realistic Setting**: Moving away from the idealized setting, we must account for possible discrepancies between $y\_t$ and $\text{BR}(h_t)$ due to calibration error. This introduces: (1) An increased sample complexity in the Explore phase, given the necessity to learn a near-optimal strategy from noisy responses; (2) Potential deviations from the action $\tilde{y}=\text{BR}(\tilde{h})$ due to miscalibrations in belief. To address the first challenge, we employ Algorithm 2 which constructs an *approximate* best response oracle by repeatedly interacting with a calibrated agent. For the second challenge, we require our learned policy $\tilde{h}$ to be robust against inaccurate forecasts. This is reflected in condition (P1), which necessitates the ball of radius $\varepsilon_2$ around $\tilde{h}$ to be fully contained in the polytope $P_{\tilde{y}}$. The critical insight from (P1) is: for any forecast $p_t$ that results in a best response $y_t\neq\tilde{y}$, there must be a minimum distance of $\varepsilon\_2$ separating $p_t$ from $\tilde{h}$. Combined with the definition of calibration error, this relationship allows us to establish an upper bound on the number of such rounds.
- **Added figure**: We'll add two figures illustrating the notations $B\_2(S,\pm\varepsilon)$ and the relation between $\tilde{h}$ and $\bar{p}_{T_2}$. The PDF containing the figures is attached to the global response.
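Condition (P1) above requires the $\varepsilon_2$-ball around $\tilde{h}$ to lie inside the polytope $P_{\tilde{y}}$. For a polytope given by linear inequalities this containment has a simple closed-form check (a sketch; the halfspace representation $\{p : Cp \le d\}$ and function names are our assumptions):

```python
import numpy as np

def ball_in_polytope(h, eps, C, d):
    # B_2(h, eps) is contained in {p : C @ p <= d} iff, for every row i,
    #   C_i @ h + eps * ||C_i||_2 <= d_i,
    # since the maximum of C_i @ p over the ball is C_i @ h + eps * ||C_i||_2.
    margins = d - C @ h
    row_norms = np.linalg.norm(C, axis=1)
    return bool(np.all(margins >= eps * row_norms))
```

Equivalently, any forecast inducing a different best response must be at distance at least $\varepsilon_2$ from $\tilde{h}$, which is the separation used to bound the number of miscalibrated rounds.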
---
Rebuttal Comment 1.1:
Title: Acknowledgment
Comment: Thank you to the authors for providing a detailed and informative response to my 3 main questions.
The point regarding the potential necessity of adaptivity for obtaining good/improved convergence rates is interesting, and I now agree that adaptivity fits into the scope of the manuscript sufficiently naturally.
Also, the reworked paragraph on the specifics of the algorithmic contribution in Section 3.1 is much appreciated, and the attached graphics are clean and informative.
Therefore, I've increased my score for the paper and maintain my positive opinion of it.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to read our rebuttal and for increasing the score! | null | null | Rebuttal 1:
Rebuttal: Please refer to the attached PDF for the added figures.
Pdf: /pdf/d47477467891d068fd3532dfd46daae32d9e5910.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Switching Autoregressive Low-rank Tensor Models | Accept (poster) | Summary: This paper introduces a new unsupervised probabilistic model for time series analysis: the Switching Autoregressive Low-rank Tensor (SALT). It combines a Low Rank Tensor parametrization of autoregressive (AR) models with switching Dynamics. The two main contributions are:
1) The SALT model itself. Although the tensor parametrization is not new, they successfully augment it with switching dynamics and derive an EM-based learning and inference algorithm with closed-form updates.
2) A theoretical connection between Linear Dynamical Systems (LDS) and Low-Rank Tensor Autoregression.
The model is evaluated on simulated, behavioral and neural datasets.
Strengths: The paper is very well written despite the need for numerous notations. The author fluently expose their model and relate it to existing approaches.
The link between LDS and Low-Rank Tensor AR models is important and novel (to my understanding). Not only does it help in grasping the relationship between those two models, but it also provides a bound linking the dynamics spectrum, the tensor rank, and the approximation error one makes when using the tensor AR model. Among other things, I believe it can guide the specification of the tensor rank, which is a non-trivial operation.
SALT builds upon existing approaches, and although it is not more expressive than sLDS, inference and learning use closed form updates which is very convincing from a practical perspective. Importantly, it retains the possibility to analyze a low dimensional representation of the observed time series.
Both simulated and real world experiments are convincing and the code is provided.
Weaknesses: 1) Title 3.3 is confusing. I agree that the theoretical connection discovered by the author is significant, but it does not concern SALT and sLDS. The link is between LDS and Low Rank Tensor AR models.
2) I think the graphical model of Figure 1 is wrong. Why isn't there any arrow between $z_t$ and $z_{t+1}$ despite $z_{t+1} \sim \text{Cat}(\pi^{(z_t)})$ ?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1) Would it be possible to concatenate the 3-way autoregressive tensors $\mathcal{A}^{h}$ in a 4-way low rank tensor $\mathcal{A}$ of size $N \times N \times L \times h$ ? This might yield an even more compact description of the data.
2) I understood Proposition 1 was a novel contribution from the authors. If it's indeed the case, it's worth mentioning it (more) explicitly in the introduction.
3) How stable are the CP-SALT discovered factors from one initialization to the next? It could be interesting to look at the factors directly to characterize dynamics, but I think the discovered dynamics could be roughly preserved despite different $\mathcal{A}^{h}$?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: 1) The proposed method is based on Gaussian noise models, which is acknowledged by the author. In its current form, it cannot, for example, efficiently handle image data without pre-processing steps.
2) The link between SALT and SLDS is "only" demonstrated for non switching dynamics ($h=1$ ).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer 4hZ1
We thank the reviewer for the time to review our submission; for their detailed and insightful review; and for highlighting the contributions and the clarity of our submission.
## R.E. Weaknesses:
**1. Section title:**
>Title 3.3 is confusing. I agree that the theoretical connection discovered by the author is significant, but it does not concern SALT and sLDS. The link is between LDS and Low Rank Tensor AR models.
We have removed switching from the section title.
**2. Graphical model:**
>I think the graphical of Figure 1 is wrong. Why isn't there any arrow between z_t and z_t+1?
Great catch – we also noticed this the day after submission… Yes, there should be a link between z’s. We have corrected the figure.
## R.E. Questions:
**1. Concatenating into 4-way tensor:**
>Would it be possible to concatenate the 3-way autoregressive tensors A^h in a 4-way low rank tensor A of size N x N x L x H? This might yield an even more compact description of the data.
Concatenating across states is certainly possible. However, it isn’t immediately clear how to “mine” additional understanding from the representation.
**2. Highlight novelty of Proposition 1:**
>I understood Proposition 1 was a novel contribution from the authors. If it's indeed the case, it's worth mentioning it (more) explicitly in the introduction.
Thank you, yes, Proposition 1 is novel. We have highlighted this more clearly in the introduction.
**3. Stability of factors:**
>How stable are the CP-SALT discovered factors from one initialization to the next ? It could be interesting to look at the factors directly to characterize dynamics, but I think the discovered dynamics could be roughly preserved despite different A^h?
For the NASCAR and Lorenz experiments, the factors are very stable (up to permutations and rotations) between runs. For the worm and mice experiments, they are slightly more variable. We have added a brief discussion of this, explicitly highlighting the equivalence classes and how these can be examined/identified.
## R.E. Limitations:
1. **Handling alternative modalities:** See General Response. SALT could be easily applied to handle, for instance, image embeddings generated by a deep autoencoder. This is a very interesting direction for future research!
2. **Links between SALT and SLDS around switches:** We attempted to analyze both SALT and SLDS around switches. However, the analysis rapidly became intractable. Practically speaking however, we only require that SALT models produce “similar” switching behavior. Further understanding the theoretical difference between the two methods is an interesting and challenging opportunity for follow-up work.
**Thank you again for your response.** If you have further questions, we are happy to answer them!
--- The SALT authors.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their clarification and additional analysis. I maintain my strong accept score of 8. | Summary: The paper introduces a new model for time-series called SALT (switching autoreg. low-rank tensor). The goal with SALT is to offer a "best of both worlds" alternative to AR-HMMs and switching linear dynamical systems (SLDS).
SALT's relative advantages are:
* enjoys closed-form parameter estimation (unlike SLDS)
* enjoys lower parameter count, meaning less likely to overfit for long-range models on small datasets (unlike AR-HMM)
| | AR-HMM | SALT | SLDS |
|------------------|-------------|-------------|-----------------------|
| param estimation | closed-form EM | closed-form EM | need MCMC/variational |
| parameter count | O(N^2) | O(N) | O(N) |
| hyperparameters | H, L | D, H, L | D, L |
where
* D = rank / number of latent state dims
* H = num hidden states
* L = autoregressive lag
The key idea behind SALT is to essentially force the autoregressive coefficient matrix of an AR-HMM to take a specific factorization structure (See Eq 6-7).
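A minimal numerical sketch of that idea: a rank-$D$ CP parameterization of a single state's $N \times N \times L$ autoregressive tensor (the factor names U, V, W are our conventions, and the exact factorization in Eq 6-7 may differ in detail):

```python
import numpy as np

def cp_ar_tensor(U, V, W):
    # A[i, j, l] = sum_d U[i, d] * V[j, d] * W[l, d]
    # Parameter count: D * (2N + L) instead of the dense N * N * L.
    return np.einsum('id,jd,ld->ijl', U, V, W)

def ar_predict(A, history):
    # One-step autoregressive mean: x_t[i] = sum_{j,l} A[i, j, l] * x_{t-l}[j]
    # history has shape (L, N), most recent lag first.
    return np.einsum('ijl,lj->i', A, history)
```

Per discrete state, the EM M-step then estimates the low-rank factors rather than a dense coefficient matrix, which is what keeps the updates closed-form while reducing the parameter count.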
In Sec 3.2, a closed-form EM estimation algorithm is derived.
In Sec 3.3, Prop 1 presents a formal error analysis of how well a SALT model can approximate a *stable* LDS
Experiments in Sec 4 cover
* 5.1: analysis of Prop 1 ideas on a simulated dataset
* 5.2: evidence that SALT might be as expressive as SLDS on synthetic data
* 5.3/5.4: real experiments on mouse behavior and worm neurons
Strengths:
+ elegant proposal for factorizing coefficients of AR-HMM
+ parameter estimation via EM allows all steps to be closed-form
+ thoughtful experiments on synthetic and real data
Weaknesses:
### Why prefer SALT over SLDS? What real computational advantage?
In Sec 5.4, on a big real dataset it is argued that SALT "can perform as well as" SLDS. While this is all fine, is there a strong advantage that makes a practitioner studying this data prefer SALT?
It seems that, ultimately, the parameter count of SALT is larger than that of SLDS (Table 1).
Closed-form parameter estimation is nice of course, but if the approximate methods for SLDS worked well enough on this dataset, when should a practitioner think that SALT is preferable, if ever?
### Missing comparison to AR-HMM with L2-regularized coefs
The paper's story suggests that AR-HMM overfitting is the major reason a new model is needed. However, a natural way to prevent overfitting (that preserves closed-form EM parameter estimation) is to apply L2 regularization to (some) AR coefficients. As best as I can tell, the results shown emphasize maximum likelihood estimation, without regularization.
While SALT is elegant, why should a practitioner invest in SALT and its more complex parameterization over a well-studied way to prevent overfitting?
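For concreteness, the regularized-yet-closed-form alternative described above amounts to responsibility-weighted ridge regression per discrete state; a minimal sketch under that assumption (all names illustrative):

```python
import numpy as np

def ridge_m_step(X, Y, resp, lam=1.0):
    """Closed-form, L2-regularized M-step for one discrete state's AR coefficients.

    X    : (T, N*L) stacked lagged observations (design matrix)
    Y    : (T, N)   current observations
    resp : (T,)     E-step responsibilities for this state
    lam  : L2 regularization strength
    Returns the (N*L, N) coefficient matrix minimizing the
    responsibility-weighted squared error plus lam * ||A||_F^2.
    """
    Xw = X * resp[:, None]                      # weight rows by responsibility
    reg = lam * np.eye(X.shape[1])
    return np.linalg.solve(X.T @ Xw + reg, Xw.T @ Y)

# Tiny usage example with synthetic data.
rng = np.random.default_rng(1)
T, N, L = 200, 3, 2
X = rng.normal(size=(T, N * L))
Y = rng.normal(size=(T, N))
resp = rng.uniform(size=T)
A = ridge_m_step(X, Y, resp, lam=0.5)
assert A.shape == (N * L, N)
```

Increasing `lam` shrinks the coefficient norm, which is the overfitting control the review has in mind.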
### Hyperparameter settings need to be clarified
**Update after response**: The author response resolved this concern. Text below from original maintained for posterity.
In my understanding, SALT needs the user to select D, H, and L as hyperparameters. In the main paper, I found it difficult to understand which settings were used in several cases and how those were determined.
My concern is that their SALT model could be using substantially higher D/H/L values than alternatives. There needs to be a transparent way these hyperparameters are selected on each dataset.
I'd suggest in Fig 3 caption, reporting N/D/H/L for both racetrack and Lorenz data. Similarly with Fig 4 for mouse data and Fig 5 for worm neural activity.
### Missing some related work
**Update after response**: The author response resolved this concern. Text below from original maintained for posterity.
I'd suggest the authors consider a conceptual comparison to deep switching auto-regressive factorization (DSARF), published by Farnoosh et al at AAAI '21.
Like SALT, DSARF produces discrete segmentations and continuous latent trajectories for a time-series. SALT's factorization is conceptually distinct and has closed-form estimation, while DSARF requires non-conjugate inference via VI with Monte Carlo approximations of gradients. However, DSARF could be more flexible.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors:
1. How can we use Prop. 1 to decide which D value is optimal? Caption of Fig 2 implies we can use Prop. 1 to pick/rank D for SALT given a known LDS model. But doesn't Prop 1 as stated just provide a statement on error when SALT's D is set to one specific value D = n + 2m?
2. Seems like SALT has more hyperparameters (H, L, and D) to select than either AR-HMM or SLDS (each requires 2 of the 3). Are there good rules of thumb to avoid the expense of 3-dimensional grid search?
3. Can you report typical runtimes on the large datasets you study? Not just for training, but also hyperparameter selection as well.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Sec. 6 entirely lacks a proper paragraph discussing the limitations of the work at the end of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer 6MVR:
We thank the reviewer for taking the time to read our submission and for their detailed and insightful feedback. We especially appreciated the description of our method as “elegant”!
The main theme of your review seems to be centred on the experimental utility of SALT, in terms of comparative performance and hyperparameter tuning. We provided some feedback in our General Response, and provide more detailed responses here.
## R.E. Weaknesses:
**1. Why prefer SALT over SLDS? What real computational advantage?**
>In Sec 5.4, on a big real dataset it is argued that SALT "can perform as well as" SLDS. While this is all fine, is there a strong advantage that makes a practitioner studying this data prefer SALT? [...]
A key advantage of SALT over SLDS is that exact inference (posterior over state and evaluation of the log marginal likelihood) in SALT models is straightforward; whereas SLDS can only evaluate a bound on the likelihood and a variational posterior over the latent state (or resort to MCMC). This makes model development more straightforward and model comparison more accurate. Inference in SALT models is notably faster than (even approximate) inference in SLDS models. Finally, the autoregressive dependency structure parameterised by SALT allows more direct insight into the system, compared to the somewhat abstract latent states recovered by SLDS. The factors can be individually inspected to garner further insight into the system.
*We stress, however, that SALT analysis is a good complement to conventional SLDS analysis, providing different, but equally useful, insights into the system.*
**2. Missing comparison to AR-HMM with L2-regularized coefs:**
>The paper's story suggests that AR-HMM overfitting is the major reason a new model is needed. [...]
We refer the reviewer to General Response B for general discussion of the practical utility of SALT. Beyond this, however, SALT recovers a low-dimensional continuous description of the data as well, defined as the vector value computed prior to multiplication with the output factors. The example traces of this low-dimensional variable are shown in Figure 3. We have highlighted this benefit over ARHMMs in the updated text.
As noted in General Response B, there is no consensus on the best or most principled way to regularize the tensor of an ARHMM, and our experience in previous work was one of the motivations behind SALT! For fair comparisons across all models, we opted for the simplest approach and used L2 regularization. We have added further discussion of this point to the main text.
**3. Hyperparameter settings need to be clarified:**
>In my understanding, SALT needs the user to select D, H, and L as hyperparameters. [...]
The parameter complexities of SALT and SLDS were comparable across most of the models we considered. Note that for the C. elegans experiment, we used longer lags than was strictly necessary to model the data (to examine the longer time dependencies in the data). Models with commensurate numbers of parameters performed similarly. We will add further clarification of this to the text.
**4. Missing some related work:**
>I'd suggest the authors consider a conceptual comparison to DSARF [...]
Thank you for the very relevant reference! We have added a qualitative comparison between SALT and DSARF to the revised manuscript. We have also compared SALT to DSARF on the apnea example used by Farnoosh et al [2021]. This application is of particular interest because of the clinical relevance of the discrete states (respiration vs no respiration, see additional PDF). We find that SALT performs comparably or slightly better than DSARF, achieving a normalized error of 22.57% vs DSARF's 23.86%, and producing qualitatively better discrete segmentations. We will include these results and further discussion in a camera-ready version.
## R.E. Questions:
**1. Selecting D using Prop. 1:**
>How can we use Prop. 1 to decide which D value is optimal? [...]
We explored using Proposition 1 to set D in Supplementary Figure 7, where we are able to predict the optimal rank for a SALT model when fitting to an LDS. Proposition 1 further states that the accuracy (and consequently the marginal likelihood) is only dependent on the number of lags above a certain rank, a result which we verify in the additional PDF. This result, in part, motivated the use of low-rank approximations.
**2. Hyperparameter fitting:**
>Seems like SALT has more hyperparameters (H, L, and D) to select than either AR-HMM or SLDS [...]
See General Response. In general, we find SALT models faster and more robust to fit than SLDS models, and so even a three-way grid search takes a comparable amount of “user” and computational effort to the two-way SLDS search (see additional PDF). In our experience, setting good hyperparameter ranges is straightforward with domain knowledge (eg. length of time dependencies). We also believe that the simpler inference in SALT constitutes a “gain”, whereas the inference pipeline used in SLDS is a hidden hyperparameter which dramatically impacts performance.
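The three-way search described above is a nested sweep over (D, H, L) scored on held-out data; a generic sketch with placeholder `fit`/`score` callables (hypothetical, not the authors' API):

```python
import itertools

def grid_search(fit, score, Ds, Hs, Ls):
    """Pick the (D, H, L) setting maximizing a held-out score.

    `fit` trains a model for one hyperparameter setting; `score`
    evaluates it (e.g., an exact test log-likelihood).
    """
    best = max(
        ((score(fit(D, H, L)), (D, H, L))
         for D, H, L in itertools.product(Ds, Hs, Ls)),
        key=lambda t: t[0],
    )
    return best[1]

# Toy usage: pretend the held-out score peaks at D=3, H=2, L=4.
choice = grid_search(
    fit=lambda D, H, L: (D, H, L),
    score=lambda m: -((m[0] - 3) ** 2 + (m[1] - 2) ** 2 + (m[2] - 4) ** 2),
    Ds=range(1, 6), Hs=range(1, 5), Ls=range(1, 8),
)
assert choice == (3, 2, 4)
```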
**3. Runtimes:**
>Can you report typical runtimes on the large datasets you study? [...]
See General Response. We have added typical runtimes for execution and training, and estimates for the time for hyperparameter tuning. SALT models retain the runtime cost of ARHMMs, but the resistance to overfitting of SLDS, overall making hyperparameter selection easier.
## R.E. Limitations:
**1. Discussion of limitations**: See General Response. We have added a more detailed limitations section touching on/ameliorating many of the points you raise.
**Thank you again for your response.** We ask if the reviewer would consider upgrading their score if we have successfully allayed your concerns. Of course, if you have further questions, we are happy to answer them!
--- The SALT authors.
---
Rebuttal Comment 1.1:
Title: Upgrading score to an accept
Comment: Thanks to the authors for the insightful response. I appreciate the artifacts in the PDF.
Fig 1: The new comparison to DSARF, which suggests at least that
--- based on RMSE of reconstructions on one dataset, SALT is competitive with DSARF
--- the binary segmentations indeed do look better for SALT to my eye from a quick review
Tab 1: The complete listing of hyperparameter settings for all experiments
--- This is helpful for reproducibility and ensuring fair comparisons across models. It relieves my original concerns about fairness.
Tab 2: Runtimes
--- This helps clarify that ARHMM runtime at training is surprisingly high and that SLDS is definitely more costly than SALT (esp at inference)
If the paper is revised suitably as promised (and includes these new artifacts), I am now persuaded to accept.
### Remaining questions
Q0: Regarding L2-regularization of coefs for the AR-HMMs, can you confirm that the experiments in the paper did use this regularization?
Your response seems to indicate this, but I can't find any clear description of your regularization strategy in the paper or the supplement.
Please confirm. If you did use regularization already, I suggest revision to make clear how you pick the regularization strength, etc.
Q1: Can you clarify what you mean by the following? Just that selecting between MCMC vs variational vs other methods is tough? Or that even within one choice, there are other "hidden" costs like learning rates?
> the inference pipeline used in SLDS is a hidden hyperparameter which dramatically impacts performance.
I think a clear description of the "hidden" costs of existing SLDS methods would help improve the paper
---
Reply to Comment 1.1.1:
Title: Clarifications
Comment: Excellent! We are super happy to have allayed some of your concerns and answered some questions, and that you chose to raise your score. In regard to your questions:
Q0: We did use L2 regularization in all ARHMM models in the paper. Thank you for the suggestion to include further details. In the updated paper, we will fully describe our investigation of regularization hyperparameters using a grid search.
Q1: Thank you for emphasising this, because it is an important plus-point for SALT that we under-discussed. SLDS uses non-exact inference, such as MCMC or variational methods, where each method entails tuning parameters like learning rates, number of samples, optimizers etc, just to do inference in a model. So to concretely answer your question, even after selecting an inference method, a user still needs to select among many tuning parameters. In contrast, SALT uses simple and exact inference. We will add explicit discussion of these hidden hyperparameters within SLDS inference approaches, why this is important, and how SALT is beneficial over SLDS approaches.
Thank you again to the reviewer for their positive feedback and engagement! | Summary: This paper proposes a new time-series model called Switching Autoregressive Low-rank Tensor Model (SALT) that combines the advantages of autoregressive hidden Markov models and switching linear dynamical systems while addressing their weaknesses. SALT allows for longer range dependencies without overfitting and has been proven to be effective in various prediction tasks. The paper also explains the low-rank factorization used in SALT parameterization and provides experimental results demonstrating the effectiveness of SALT in various real-world applications.
Strengths: 1. Novelty: The paper proposes a new time-series model, SALT, that combines the advantages of two existing models while addressing their weaknesses. This is a novel approach that has not been explored before.
2. Clarity: The paper is well-written and easy to understand, even for readers who are not experts in the field. The authors provide clear explanations of the model and its components, as well as the experimental setup and results.
3. Empirical evaluation: The paper provides empirical evidence of the effectiveness of SALT in various real-world applications, including neural and behavioral time series. The authors compare SALT to other commonly used time-series models and demonstrate its superior performance.
4. Reproducibility: The authors provide code and data to facilitate reproducibility of their experiments. This is important for other researchers who want to build on this work or apply SALT to their own datasets.
5. Generalizability: The authors demonstrate the effectiveness of SALT in various real-world applications, suggesting that it is a generalizable model that can be applied to a wide range of time-series data.
Weaknesses: 1. Lack of theoretical analysis: While the paper provides empirical evidence of the effectiveness of SALT, it does not provide a detailed theoretical analysis of the model. This may limit the understanding of the model's properties and limitations.
2. Limited comparison to state-of-the-art models: While the paper compares SALT to other commonly used time-series models, it does not compare it to the most recent state-of-the-art models. This may limit the understanding of how SALT compares to the best-performing models in the field.
3. Limited discussion of limitations: The paper does not provide a detailed discussion of the limitations of SALT. This may limit the understanding of the situations in which SALT may not be the best choice for modeling time-series data.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. How generalizable are the results of this paper to other types of time-series data and applications?
2. How robust are the results of this paper to variations in the experimental setup, such as the choice of evaluation metrics or hyperparameters?
3. How does the complexity of SALT compare to other time-series models, and what are the implications of this for its practical use?
4. How does the choice of low-rank tensor regression in SALT affect its performance, and how might other regression methods be more appropriate for certain types of data?
5. What are the potential ethical implications of using SALT or other time-series models for analyzing sensitive or personal data, and how can these be addressed?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: 1. Limited scope: The paper focuses on a specific type of time-series data and does not explore the applicability of SALT to other types of time-series data. This may limit the generalizability of the model to other domains.
2. Limited sample size: While the paper provides empirical evidence of the effectiveness of SALT, the sample size of the experiments is relatively small. This may limit the generalizability of the results to larger datasets.
3. Limited evaluation metrics: The paper primarily evaluates the effectiveness of SALT using prediction accuracy metrics. While these are important metrics, they may not capture all aspects of the model's performance, such as its ability to capture complex temporal dependencies.
4. Limited discussion of hyperparameters: The paper does not provide a detailed discussion of the hyperparameters used in the experiments. This may limit the understanding of how sensitive the model's performance is to the choice of hyperparameters.
5. Limited discussion of computational complexity: While the paper briefly mentions the computational complexity of SALT, it does not provide a detailed analysis of the model's computational requirements. This may limit the understanding of the practical implications of using SALT in real-world scenarios with large datasets.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to read our submission and for their detailed and insightful feedback. The five strength points were particularly heartening. We now provide more detailed feedback.
## RE Weaknesses
**1. Theoretical analysis:**
>Lack of theoretical analysis: While the paper provides empirical evidence of the effectiveness of SALT, [...]
We make a novel and broadly applicable link between low-rank tensor regressions and linear dynamical systems in Proposition 1, linking the core components of SALT and SLDS models. Furthermore, SALT models are still fundamentally ARHMMs, allowing us to directly use existing results and methods to further understand and control the behavior of SALT models, eg, enforcing sparsity [Shah et al, 2015, NeurIPS] and non-negativity of the factors [Shashua and Hazan, 2005, ICML].
**2. Choice of baselines:**
>Limited comparison to state-of-the-art models: While the paper compares SALT [...]
See General Response B. We compare SALT to SLDS, ARHMMs and TVART (in the supplement). As raised by 6MVR, we also compared to DSARF on an example from the original paper, and find that SALT performs comparably to, if not better than, this deep method.
We note that the relevant comparisons are models that yield a hybrid continuous-discrete representation. Many common methods are therefore not directly applicable. If the reviewer has suggestions of models with this structure, we are happy to include a comparison ahead of any camera ready version.
**3. Discussion of Limitations:**
>Limited discussion of limitations: The paper does not provide a detailed discussion of the limitations of SALT. [...]
See General Response (A). We have included thorough discussion of SALTs limitations.
## RE Questions
**1. Generalizability of SALT:**
>How generalizable are the results of this paper to other types of time-series data and applications?
We believe the analysis allowed by SALT is fully generalizable, particularly when the interdependencies between observations are insightful to model and a mixed discrete-continuous representation is sought. The ease of tuning and fitting SALT models makes it an excellent option for exploratory data analysis. Beyond the two time-series applications in the paper, we have added a third experiment on the sleep apnea data from DSARF (as pointed to by 6MVR, see additional PDF), further demonstrating the generalizability of SALT.
**2. Robustness to hyperparameters and metrics:**
>How robust are the results of this paper to variations in the experimental setup, such as the choice of evaluation metrics or hyperparameters?
See General Response. SALT models are fairly robust to the choice of hyperparameters (within reason). There are also fewer hyperparameters, as there is no learning rate or decay schedule required, no tuning of variational/interactive/approximate inference algorithms, and EM guarantees convergence to a (local) maximum so there is no need for tuning early stopping.
With regard to metrics, we considered multiple different metrics for each model. We found that the use of confusion matrices, segmentations and marginal log-likelihoods (where applicable) yielded comparable results across methods and applications.
**3. Complexity of SALT:**
>How does the complexity of SALT compare to other time-series models, and what are the implications of this for its practical use?
The complexity of SALT lies between the ARHMM and the SLDS. The ARHMM is difficult — often intractable — to fit because of overparameterization. The SLDS is tricky to fit because it requires iterative variational inference methods. SALT admits fast computation of an exact marginal likelihood, allowing fair comparison **between** different model classes. In contrast, SLDS estimates a bound and is dependent on approximate or variational inference schemes.
SALT is also easier to implement and more extensible than SLDS. Each new SLDS model variant requires deriving and tuning a new variational inference scheme. New SALT models can be derived by simply modifying the update equations (falling back to coordinate descent where necessary). As in Weakness 1, SALT can therefore directly leverage the rich literature on tensor factorization methods.
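Since SALT remains an ARHMM, "exact marginal likelihood" here refers to the standard HMM forward recursion; a generic log-space sketch (not the authors' code):

```python
import numpy as np

def hmm_log_marginal(log_pi, log_P, log_liks):
    """Exact log p(y_{1:T}) for an HMM/ARHMM-style switching model.

    log_pi   : (H,)    log initial state distribution
    log_P    : (H, H)  log transition matrix, rows index the previous state
    log_liks : (T, H)  per-timestep, per-state log emission likelihoods
    """
    alpha = log_pi + log_liks[0]
    for t in range(1, len(log_liks)):
        # Marginalize the previous state in log space.
        alpha = np.logaddexp.reduce(alpha[:, None] + log_P, axis=0) + log_liks[t]
    return np.logaddexp.reduce(alpha)

# Sanity check: with state-independent emissions the marginal likelihood
# reduces to the product of the per-step likelihoods.
log_pi = np.log(np.array([0.5, 0.5]))
log_P = np.log(np.full((2, 2), 0.5))
log_liks = np.log(np.full((3, 2), 0.2))
assert np.isclose(hmm_log_marginal(log_pi, log_P, log_liks), 3 * np.log(0.2))
```

An SLDS, by contrast, can only bound this quantity via variational inference or estimate it via sampling.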
**4. Role of rank in optimization:**
>How does the choice of low-rank tensor regression in SALT affect its performance [...]
The rank trades off expressivity with overfitting, acting as an implicit regularizer, in contrast to other models that require additional regularizers and hyperparameters to prevent overfitting. As noted in General Response B, there is no consensus on the best way to regularize an ARHMM, so SALT actually provides an intuitive regularization.
**5. Ethical concerns:**
>What are the potential ethical implications of using SALT [...]
We do not believe there are any new ethical implications from SALT.
## RE Limitations
1. **Type of evaluation**: See Questions 1.
2. **Scope of evaluation**: We tested SALT on a number of standard benchmarks common to this domain. We have also tested SALT on the apnea dataset used in DSARF (noted by Reviewer 6MVR). SALT outperforms DSARF in both reconstruction accuracy and segmentation quality.
3. **Temporal dependencies**: Quantifying temporal dependencies is tricky. We explored this in the C. elegans example, using a known neural function as a proxy for capturing time dependencies. We have added some clarification on this to the results section.
4. **Hyperparameters**: See General Response / Questions 2.
5. **Computational complexity**: See General Response and additional PDF. SALT models are broadly the same speed or faster to train and apply than ARHMMs and SLDSs.
**Thank you again for your response.** If we have successfully allayed your concerns, we ask if you would consider upgrading your score. Of course, if you have further questions, we are happy to answer them!
--- SALT authors. | Summary: The paper proposes Switching Autoregressive Low-Rank Tensor (SALT) models, a variant of an Autoregressive Hidden Markov Model (ARHMM) in which the model’s temporal dynamics are captured by a low-rank tensor approximation, thus combining the parameter efficiency of Switching Linear Dynamical Systems (SLDS) with the simple learning and inference techniques available for ARHMMs. The paper further shows that a SALT model can approximate a stable LDS model to a degree that depends only on the eigenspectrum of the LDS model and the order of the SALT model.
Strengths: - The paper makes two main contributions: (1) a parametrization of ARHMM dynamics with a low-rank Tucker decomposition; and (2) a theoretical analysis of the resulting model’s approximation error relative to a stable LDS. The latter is an interesting theoretical insight and includes a rigorous proof in Appendix B.
- The background section (Section 2) provides a useful recap of (switching) autoregressive models and (switching) linear dynamical systems that motivates the need for parameter-efficient architectures with tractable learning and inference algorithms.
- The experiments (Section 5) confirm that SALT models can learn (S)LDS dynamics and require less training data than their ARHMM parent due to their parameter-efficient representation. The experiments on real-world data demonstrate that SALT models can learn semantically meaningful filters / state representations and outperform ARHMMs in terms of test log-likelihood.
Weaknesses: - As discussed in Section 4, structural decompositions of temporal dynamics, including low-rank approximations, have been explored in a variety of different contexts. This includes the representation of the autoregressive tensor as a Tucker decomposition and, while I appreciate the differences in the graphical structure compared to these models and the theoretical insights provided in Proposition 1, the technical contribution of this paper is not particularly strong.
- Although the core ideas of this paper are relatively simple, the confusing tensor notation makes the paper unnecessarily hard to follow. Uncommon operations like the $n$-mode tensor-matrix product or the $n$-mode tensor matricization must be properly defined, ideally including visual illustrations. Having to figure out which high-dimensional slices are being multiplied is a burden on the reader and diverts the focus from the underlying ideas. Where possible, I would recommend to express the model dynamics in summation notation instead of tensor operations; it would greatly improve readability.
- Ultimately, the representation of the autoregressive tensor with a Tucker decomposition is a structural assumption that modulates the spectrum between flexible models prone to overfitting and robust models prone to large bias. Since SALT models cannot be more expressive than a generic ARHMM, I view them primarily as a form of implicit regularization and, as such, would have liked to see a comparison with other regularization techniques (e.g., a Bayesian treatment with strong priors).
- The experimental validation compares the proposed method to two traditional time-series models (ARHMM and SLDS) but does not include any state-of-the-art baselines (e.g., based on deep variants of autoregressive or state-space models, Gaussian processes, Transformer architectures, normalizing flows, etc.). Even if the proposed method is more related to ARHMM or SLDS, it is expected that the evaluation takes other time-series architectures into account, including more competitive and more recent developments. The data is relatively simple as well, with two toy datasets consisting of (S)LDS simulations and a small number of real-world sequences (3 mice, 1 worm).
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Minor comments:
- It would improve the accessibility of the paper if it included graphical models of (S)LDS and (S)VAR, similar to Figure 1, so that the advantages and disadvantages of the different model classes can be analyzed in terms of their (marginalized) conditional independence assumptions.
- I wish the presentation had pointed out the strong connections of the Tucker decomposition to (higher-order) PCA/SVD. In contrast to the Tucker decomposition, this is something most readers are familiar with and would have helped with the intuition of an orthogonal/unitary factor approximation.
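The Tucker–SVD connection the reviewer mentions can be made concrete with the truncated higher-order SVD, which computes each Tucker factor as the leading left singular vectors of a mode unfolding; a minimal illustrative sketch (not from the paper):

```python
import numpy as np

def hosvd(T_, ranks):
    """Truncated higher-order SVD: a Tucker decomposition whose factor
    matrices come from an ordinary SVD of each mode unfolding."""
    factors = []
    for mode, r in enumerate(ranks):
        # Unfold along `mode`: rows index that mode, columns everything else.
        unfolding = np.moveaxis(T_, mode, 0).reshape(T_.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])
    # Core tensor: contract each mode with the corresponding factor transpose.
    core = T_
    for mode, U in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

# With full ranks the (HO)SVD factors are orthogonal and reconstruction is exact.
X = np.random.default_rng(2).normal(size=(3, 4, 5))
core, facs = hosvd(X, ranks=(3, 4, 5))
recon = core
for mode, U in enumerate(facs):
    recon = np.moveaxis(
        np.tensordot(U, np.moveaxis(recon, mode, 0), axes=1), 0, mode)
assert np.allclose(recon, X)
```

Truncating the ranks then plays the same role as keeping the top principal components in PCA.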
Questions:
- Section 5.1: how were the random matrices sampled?
- Figure 2(A): why does a rank *higher* than 7 (Tucker-SALT) and 10 (CP-SALT) lead to *worse* approximations? Should the MSE not be strictly decreasing?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: - The paper briefly mentions possible extensions but does not include an explicit discussion of limitations.
- The paper does not discuss any ethical concerns related to the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer BTJL
We thank the reviewer for taking the time to read our submission and for their detailed and insightful feedback. The strengths you outline really neatly encapsulated our objectives, and so that was great to hear! We will now provide some more detailed feedback beyond the General Response provided above.
## R.E. Weaknesses:
**1. Technical Contributions:**
>[...] While I appreciate the differences in the graphical structure compared to these models and the theoretical insights provided in Proposition 1, the technical contribution of this paper is not particularly strong.
While the technical contributions of our work may not be a paradigm shift, we are confident it will be useful to many practitioners who regularly use these types of models. SALT sits at the intersection of several widely-used methods, leveraging efficient components and capturing the benefits of each model to yield a model that offers a unique insight into the system. We also release our fast JAX code. We also highlight, as raised in General Response B, that inference in SLDSs is not a “solved” problem and often requires complex, hard-to-tune inference schemes – see recent publications such as Zoltowski et al [2020, ICML] and Berger et al [2022, NeurIPS]. Therefore, achieving a parameter complexity comparable with SLDS, while retaining fast and exact inference, and bringing additional benefits, is a valuable contribution to the field that will be of particular interest to the NeurIPS readership.
**2. Improving Notation**
>Although the core ideas of this paper are relatively simple, the confusing tensor notation makes the paper unnecessarily hard to follow. [...]
See General Response A. We have simplified and clarified the notation where possible, and added diagrammatic explanations to the supplement. Thank you for this feedback.
**3. SALT as implicit regularization:**
>Ultimately, the representation of the autoregressive tensor with a Tucker decomposition is a structural assumption that modulates the spectrum between flexible models prone to overfitting and robust models prone to large bias. [...]
All models trained used L2 regularization, including the ARHMM (we have added details to the supplement). Further to our response in General Response B, we note that there is no clear consensus on the correct way to regularize the parameters of vector autoregressive hidden Markov models – especially ones with low-rank factors. Therefore, SALT provides a natural and intuitive way to control the expressivity of the model family. Nonetheless, exploring more powerful Bayesian regularization or hierarchical components to SALT are important directions for future research.
**4. Empirical evaluation:**
>The experimental validation compares the proposed method to two traditional time-series models (ARHMM and SLDS) but does not include any state-of-the-art baselines (e.g., based on deep variants of autoregressive or state-space models, Gaussian processes, Transformer architectures, normalizing flows, etc.). [...]
See General Response B. We stress, our objective with this work was not to create a state-of-the-art regression model. Instead, SALT models provide an “interpolation” between the widely used SLDS and ARHMM. These models provide a very experimentally valuable hybrid continuous-discrete representation, and are used regularly by practitioners owing to their simplicity, efficiency and utility. As such, SALT is a valuable model that allows practitioners to retain the benefits of these models, while flexibly ameliorating their individual weaknesses.
The tasks we examined were taken from the literature, but if the reviewer has suggestions for additional experiments, we are more than happy to add them ahead of a camera-ready version. Since receiving the reviews, we have compared SALT to DSARF (a “deep” switching model, as suggested by 6MVR) on the apnea example from that paper. SALT matches or outperforms DSARF in terms of reconstruction accuracy and segmentation quality. These results are included in the additional PDF.
## Minor Comments:
1. **Visualization of model families**: We have added extra graphical models for each model to the supplement, and discussed the implications in the main text. Thank you for the suggestions.
2. **Links to SVD**: This is a great connection, we have added discussion of the links to SVD to the background, and also re-highlighted them when we discuss the factors later on.
## Questions:
1. **How were random matrices sampled**: Matrices were sampled as random rotational matrices.
2. **Why can increasing rank reduce performance**: Great observation – the answer is actually overfitting to the training data. Here we are plotting the error on held-out test data. The degradation also exists for the log-likelihood but is less visible. We have added clarification on this. We stress that this overfitting is caught by cross-validation.
## Limitations:
1. **Inclusion of limitations**: See General Response A. We have added discussion of limitations including model mismatch and missing observations.
2. **Inclusion of ethical concerns**: We have added clarification that we see no ethical concerns specifically related to SALT.
**Thank you again for your response.** If we have successfully allayed your concerns, we ask if you would consider upgrading your score. Of course, if you have further questions, we are happy to answer them!
--- The SALT authors.
---
Rebuttal Comment 1.1:
Comment: I want to thank the authors for their insightful response. While it does address some of my concerns (e.g., notation, experiment details), I still feel the paper’s contribution and evaluation are relatively weak. At its core, SALT is a specific type of ARHMM parametrization that allows for efficient learning/inference, with the rank controlling the model’s expressivity. I appreciate the new comparison to DSARF, but I do think the paper requires comparisons to other types of structural constraints (e.g., priors, loss penalties, hand-crafted dynamics).
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the response. We are glad that our previous response was able to address some of your concerns.
We would like to re-emphasize that the paper's contribution is not simply a tensor-factorized ARHMM, but also the relationship between this model and widely used SLDS models. We make a novel and broadly applicable link between low-rank tensor regressions and linear dynamical systems in Proposition 1, linking the core components of SALT and SLDS models. Elucidating this relationship and demonstrating the advantages of fitting SALT models over SLDS is an important contribution to the field that will be of particular interest to the NeurIPS readership.
With regard to other types of structural constraints for vector autoregressive (hidden Markov) models, aside from L2 regularization on the parameters, there is no clear precedent for other types of regularization (See General Response B). Many regularizers and priors are difficult to work with, and are not widely used in practice. Moreover, we are not claiming that one couldn't get similar performance through ARHMMs with other types of structural constraints. We are instead exploring tensor factorization as a type of constraint, and in doing so, we not only identified an effective model class, but also bridged the gap between the widely used SLDS and ARHMM.
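To make the expressivity control concrete, here is a minimal numerical sketch (with purely illustrative dimensions and random factors – not the paper's code) of how a Tucker factorization shrinks the parameter count of a lag-$L$ autoregressive tensor:

```python
import numpy as np

# Illustrative sketch with hypothetical dimensions: a lag-L vector-
# autoregressive tensor A of shape (N, N, L) has N^2 * L parameters,
# while a Tucker factorization with core G of shape (d1, d2, d3) and
# factor matrices U1 (N x d1), U2 (N x d2), U3 (L x d3) has only
# d1*d2*d3 + N*d1 + N*d2 + L*d3 parameters.
N, L = 50, 10
d1, d2, d3 = 5, 5, 3

rng = np.random.default_rng(0)
G = rng.normal(size=(d1, d2, d3))
U1 = rng.normal(size=(N, d1))
U2 = rng.normal(size=(N, d2))
U3 = rng.normal(size=(L, d3))

# Reconstruct the full tensor via mode products: A = G x1 U1 x2 U2 x3 U3.
A = np.einsum("abc,ia,jb,kc->ijk", G, U1, U2, U3)

full_params = N * N * L
tucker_params = d1 * d2 * d3 + N * d1 + N * d2 + L * d3
print(A.shape, full_params, tucker_params)  # (50, 50, 10) 25000 605
```

Shrinking the ranks $(d_1, d_2, d_3)$ moves the model toward the robust, low-bias-variance-tradeoff end of the spectrum; growing them recovers the fully flexible ARHMM parameterization.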
**Again, thank you for your feedback**. If there are any further questions, please do not hesitate to ask.
--- The SALT authors | Rebuttal 1:
Rebuttal: Thank you to all four reviewers for taking the time to read our submission and provide insightful and constructive feedback. We presented Switching Autoregressive Low-Rank Tensor (SALT) Models, which combine the benefits of ARHMMs and SLDS models (such as parameter efficiency, fast exact inference and interpretability), while ameliorating the drawbacks of each method individually. We compared SALT to similar models across a range of problems.
Here we respond to two themes that were touched on by multiple reviews: clarity and experimental utility. We then respond to individual reviews in more detail below each review.
## (A) Clarity
Reviewers 4hZ1 and sUaQ commended the clarity of our submission. However, there were comments on the complexity of the notation, the explanation of hyperparameter tuning, and the discussion of limitations.
**R.E. Notation:** We generally followed the tensor notation of Kolda and Bader [17]. However, we agree that tensor notation can sometimes be a lot to process. We have simplified the notation and used summation operators in several places. We have also added explicit definitions of tensor operations (with diagrams!).
**R.E. Hyperparameter tuning:** We conducted extensive hyperparameter tuning for all models using grid searches, and have included extensive details. Generally, SALT models are fairly robust to hyperparameter settings and across seeds. ARHMMs are very robust across seeds for good hyperparameters, but are more susceptible to bad hyperparameter settings (resulting in very poor and more variable performance). The best SALT models use dimensions commensurate with those of the best SLDS models. We include exact dimensions in the additional PDF.
We also note that the number of hyperparameters in SALT is actually notably lower than many modern “deep” or kernel-based methods. Closed-form updates alleviate the need for tuning learning rates; there are only three architectural dimensions that need tuning (compared to the many layer widths, activations, etc. of deep methods); and exact inference removes any hyperparameters required by approximate or variational inference methodologies. Therefore, we assert that SALT actually has *fewer* hyperparameters to tune than comparable models.
**R.E. Limitations:** We have added extensive discussion of the limitations of SALT, including:
- Model mismatch: SALT is a simpler model family compared to (e.g.) DSARF. This simplicity accelerates optimization and reduces overfitting, but may increase the bias in predictions if the SALT model used is poorly tuned.
- Missing observations: SALT cannot natively handle missing observations. We considered using linear interpolation to bootstrap SALT, but believe that a more principled method for handling missing observations should be possible because of the nature of SALT models.
- Bayesian treatment: It is not currently possible to share information between time series with different dimensional observations. “Hierarchical SALT” is a possible extension to tackle this.
**R.E. Ethics:** We have added a short discussion outlining that there are no new ethical concerns as a result of SALT.
## (B) Experimental Utility
Several reviewers commented on the baselines we compared to. We note that a hybrid continuous-discrete representation of the data is experimentally valuable. For instance, MoSeq [Wiltschko et al, 2015] uses ARHMMs to segment mouse behavior into discrete labels and a continuous state. This requirement precludes many common models (transformers, GPs, RKNs, etc.), which lack a discrete component. Our objective was not to create a state-of-the-art regression model, but rather to explore the space between two widely used models that offer this description.
We highlight that inference in SLDSs is not a “solved” problem and remains an active area of research. Numerous inference approaches exist, often requiring approximate or hard-to-tune inference schemes. Recent examples include Laplace-EM [Zoltowski et al, 2020, ICML] and linear programming [Berger et al, 2022, NeurIPS]. Developing new SLDS model variants therefore requires a deep understanding of the associated inference techniques. In contrast, new SALT models can be derived by simply modifying the update equations (which can always fall back to coordinate gradient descent if a closed-form update cannot be derived). SALT can therefore tap easily into the rich existing literature on tensor regressions, such as enforcing sparsity [Shah et al, 2015, NeurIPS] or non-negativity [Shashua and Hazan, 2005, ICML], in a way that SLDS cannot.
Similarly, to our knowledge, there is no clear consensus on the best way to regularize vector autoregressive (hidden Markov) models; several possibilities exist, see, eg, Melnyk and Banerjee [2016, ICML] or Ni and Sun [2005, ASA]. Many regularizers and priors are difficult to work with, and are not widely used in practice. Beyond this, even well-regularized ARHMMs do not natively capture interpretable low-dimensional dynamics, as both SALT and SLDS models do (see Figure 3). These low-dimensional dynamics are experimentally as useful as the discrete segmentation.
As also highlighted above, SALT has fewer hyperparameters than other methods, when methods are considered in their entirety. Therefore, SALT models combine the benefits and ameliorate the weaknesses of both ARHMMs and SLDSs, and provide an accessible, extensible, interpretable and performant alternative that can be easily tuned and deployed by practitioners.
Finally, we have also compared SALT to DSARF (suggested by 6MVR) on the apnea task included in Farnoosh et al [2021] and have added these results and discussion. SALT matches or exceeds the performance of DSARF in terms of the NRMSE% on held-out test data and the quality of the segmentation (see attached PDF).
**Again, thank you for taking the time to review our paper**. If there are any further questions, please do not hesitate to ask!
--- The SALT authors
Pdf: /pdf/dc1fbe22e1430e7af43c2d96308c54a676fd7ada.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Diffusion Model is an Effective Planner and Data Synthesizer for Multi-Task Reinforcement Learning | Accept (poster) | Summary: This paper proposes the Multi-Task Diffusion Model, a diffusion-based method that incorporates Transformer backbones and prompt learning for generative planning and data synthesis in multi-task offline settings. The performance of the proposed model is demonstrated on the Meta-World and Maze2D benchmarks.
Strengths: This paper is the first to achieve both effective planning and data synthesis for multi-task RL via diffusion model and GPT.
This paper is well written and easy to follow.
Weaknesses: # The difference between the proposed model and PromptDT is not well explained.
# Additional environment and ablation experiments may be more convincing.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: # PromptDT showed performance on MuJoCo. How is the performance of the proposed model on MuJoCo?
# This paper seemed to replace transformer in PromptDT with diffusion model. It’s better to explain the difference between the proposed model and PromptDT.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The related work was introduced in Section 4. It would be clearer to introduce related works before Section 2 Preliminaries.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments and a positive assessment of our work! We are glad that you find our paper easy to follow, well-written, and the first to achieve both effective planning and data synthesis for multi-task RL. To address your concerns, we have added additional experiments on MuJoCo to validate the efficacy of our method. Our detailed response follows:
>1. The difference between the proposed model and PromptDT is not well explained.
Thank you for this good question! We will enrich the appendix with a dedicated section discussing their differences in the next version of our paper. The remarkable superiority of *MTDiff-p* over PromptDT emerges from our incorporation of the transformer architecture and trajectory prompt within the **diffusion model framework**, effectively modeling the multi-task trajectory distribution. PromptDT is built on Decision Transformer and is trained in an autoregressive manner, which limits it to predicting actions step by step. In contrast, *MTDiff-p* leverages the potency of sequence modeling, empowering it to adeptly perform trajectory generation. *MTDiff-p* has demonstrated SOTA performance in both multi-task decision-making and data synthesis experiments, while PromptDT cannot contribute to data synthesis. Technically, *MTDiff-p* extends Decision Diffuser [1] to the multi-task scenario, utilizing classifier-free guidance for generative planning to yield high expected returns, which is also recognized by Reviewer EBNs.
[1] Anurag Ajay, et al. Is conditional generative modeling all you need for decision making? International Conference on Learning Representations, 2023.
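As a brief illustration of the classifier-free guidance combination mentioned above (a generic sketch with a hypothetical helper name; the exact weighting convention in the actual implementation may differ):

```python
import numpy as np

def guided_noise(eps_uncond, eps_cond, w):
    """Classifier-free guidance: extrapolate from the unconditional
    denoiser output toward the conditional one with weight w.
    (Hypothetical helper for illustration only; w > 1 strengthens the
    conditioning signal, e.g. a high target return.)"""
    return eps_uncond + w * (eps_cond - eps_uncond)

# Toy denoiser outputs for a length-3 action segment.
eps_u = np.array([0.0, 0.0, 0.0])  # condition dropped
eps_c = np.array([1.0, 1.0, 1.0])  # conditioned on the task/return
print(guided_noise(eps_u, eps_c, 1.2))  # [1.2 1.2 1.2]
```

During training the condition is randomly dropped so that both denoiser outputs are available at sampling time.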
>2. PromptDT showed performance on MuJoCo. How is the performance of the proposed model on MuJoCo?
We have added new experiments to compare our method with PromptDT on the MuJoCo benchmark. We trained our model on the publicly available PromptDT datasets, i.e., *Cheetah-vel* and *Ant-dir*. These chosen environments have been judiciously selected due to their inherent diversity of tasks, serving as a robust test to validate the capability of multi-task learning. We report the scores (mean and std for 3 seeds) as follows:
| Methods | Cheetah-vel | Ant-dir |
| -------- | ---------------- | ---------------- |
| MTDiff-p | $-29.09\pm 0.31$ | $602.17\pm 1.68$ |
| PromptDT | $-34.43\pm 2.33$ | $409.81\pm 9.69$ |
We observed that *MTDiff-p* outperforms PromptDT by a large margin, demonstrating its efficacy.
---
Rebuttal Comment 1.1:
Comment: Thanks the authors for the detailed response and additional performance experiments. I’ll keep my score “accept".
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: We sincerely thank you for your recognition of our work! We really appreciate your effort to review our paper and your valuable comments! Thanks a lot.
---
Rebuttal 2:
Comment: Dear Reviewer,
The author has posted their rebuttal, but you have not yet posted your response. Please post your thoughts after reading the rebuttal and other reviews as soon as possible. All reviewers are requested to post this after-rebuttal-response. | Summary: The paper studies the use of diffusion models in offline multi-task reinforcement learning for planning and synthetic data generation. Both approaches use prompting to encode task-specific conditions for the generative model along with a transformer backbone. In the multitask setting, the approach outperforms prior SOTA baselines on MT50. In the synthetic data setting, the approach improves downstream TD3+BC performance.
Strengths: - Clear and well-written presentation of the method
- A thorough set of baselines and strong evaluation on the challenging MT50 benchmark for MTDIFF-p
- Elegant method of removing the requirement of one-hot task encoding by transition prompting
- Strong results showing positive multi-task transfer in both planning and synthetic data generation, the algorithm neatly extends single-task equivalents in both areas
Weaknesses: - Only custom settings with large amounts of data are considered; it would be useful to understand the minimum amount of data needed to be effective. Additionally, it would be useful to evaluate on pre-existing offline datasets for a more representative comparison.
- In Figure 6, it would be useful to compare MTDIFF-s-single with the existing single task baseline, Synthetic Experience Replay. [1]
- More thorough analysis of data quality would be valuable rather than just downstream performance, e.g. error of the transitions. Furthermore, downstream RL performance is only computed with one RL algorithm.
- The two parts of the paper - multitask planning and synthetic data generation - are not necessarily connected. For example, the synthetic data (MTDIFF-s) is used for an entirely different RL algorithm, TD3+BC, and does not contribute to the performance of the planning algorithm (MTDIFF-p).
Minor:
- Line 242: Typo ‘fine-grind’
- Line 248: the author’s definition of ‘near-optimal’ is the same as D4RL ‘full-replay’ and ‘sub-optimal’ is the same as D4RL ‘medium-replay’. These descriptions may be more clear to offline RL practitioners
- Figure 4: choosing one sampled trajectory for each approach is likely cherry-picking, I would suggest rendering N samples
- Figure 5: missing standard deviation
- Figure 6: missing the baseline without synthetic data
- Line 15: Unclear what ‘high-quality’ and ‘low-quality’ mean as it seems MTDIFF-s models the original distribution
[1] Synthetic Experience Replay. Cong Lu, Philip J. Ball, Yee Whye Teh, Jack Parker-Holder.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - How would the prompting perform if the tasks were not readily discernible from the initial transitions? E.g. multi-task settings with sparse task-specific reward
- What is the minimum data required to make the algorithm work?
- What is the speed of planning and sampling transitions with the transformer model? How does this compare to related methods?
- How does MTDIFF-s-single compare to Synthetic Experience Replay? I.e. what is the benefit of modeling full trajectories instead of transitions?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Discussion of limitations hidden in Appendix, should be moved to the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >1. ... custom settings with large amounts of data ... minimum data needed to be effective is.
We downsample the "near-optimal dataset" to 0.1$\times$/ 0.2$\times$/ 0.3$\times$ the size via random selection. We observed that the performance of *MTDiff-p* decreases with dataset reduced, dropping at $36.06\%\pm2.36\%$ success rate for 0.1$\times$ size data, which can be caused by missing parts of optimal trajectories. To validate that, we re-train *MTDiff-p* on 0.1$\times$ size data containing *only* expert trajectories. In this case, *MTDiff-p* can even obtain $85.05\%\pm1.54\%$ success rate. It is reasonable that the most critical factor for goal-conditional planning is the optimality of the dataset instead of data quantity, which has also been verified in [1]. In our paper, we consider the more challenging setting, i.e., using data with low return accounting for majority of the dataset.
>2. evaluate on pre-existing offline datasets for a more representative comparison.
Our method aims to learn a diffusion model from multi-task offline data. However, pre-existing datasets like D4RL only contain single-task datasets in each domain. As a result, we collected multi-task datasets for evaluation. Meta-World was chosen since it contains 50 varied and difficult tasks, and it is well known for its efficiency. For reproducibility, we provide the code in the supplementary materials and the link to our dataset in the Appendix. We remark that we have added new experiments on MuJoCo and *MTDiff-p* still achieves SOTA performance; the details can be seen in our response to Reviewer *gBBv*.
>3. In Figure 6, it would be useful to compare MTDIFF-s-single with the existing single task baseline.
We remark that SynthER [2] and MTDiff are almost concurrent work. Fig. 6 is used to support our hypothesis that *MTDiff-s* benefits from multi-task training, rather than to show how *MTDiff-s-single* outperforms other methods. To compare *MTDiff-s-single* with SynthER, we used the open-sourced code to train SynthER on the coffee-push, disassemble, and dial-turn tasks. We added the result to Fig. 6; the revised version is shown as Fig. 2 in the **Global PDF**. We find that SynthER still underperforms *MTDiff-s*.
>4. More thorough analysis of data quality ... downstream RL performance is only computed with one RL algorithm.
We refer to **Global PDF** for the new experiments and explanations.
>5. ...multitask planning and synthetic data generation are not necessarily connected...
Thanks for your question. Indeed, *MTDiff-s* can augment the multi-task dataset to benefit *MTDiff-p*, but such a comparison can be biased. First, *MTDiff-p* is a generative planning method and its performance is closely connected to data optimality rather than data coverage (see [1] and R1). *MTDiff-s* is designed to improve data coverage, so it is more suitable to use offline RL for evaluation. Second, for a specific task, even if *MTDiff-p*'s performance improves with data augmentation by *MTDiff-s*, the improvement can be caused by the augmented data of other tasks, since *MTDiff-p* can conduct implicit knowledge sharing among tasks. As a result, we conduct single-task augmentation and use offline RL for evaluation.
>6. Minor errors
* We will fix the typo in the next version.
* We remark that the definitions of "near-optimal" and "sub-optimal" come from paper [3]; we will mention full-replay and medium-replay based on your suggestion.
* We show more rendered cases for Maze2D in Appendix D; nevertheless, we have added another example (see Fig. 4 in the **Global PDF**) to demonstrate the effectiveness of our method.
* The corrected Fig. 5 is shown as Fig. 1 in the **Global PDF**.
* We added the results of the baseline without synthetic data into Fig. 6, which is referred to Fig. 2 in **Global PDF**.
* Good quality means the generated data (1) has low dynamics error, i.e., it follows the true dynamics of the original task, and (2) extends the data coverage to enlarge the dataset. Fig. 4 in the appendix shows the synthetic data expands the coverage of the original data, thus boosting the performance of offline RL.
>7. How would the prompting perform if the tasks were not readily discernible from the initial transitions?
Thanks for the question! It is possible that the tasks cannot be identified from their initial transitions. Nevertheless, the prompt contains both the initial transitions and the task-specific label $Z$, where $Z=(s^*_i,a^*_i,\ldots,s^*_{i+J-1},a^*_{i+J-1})$ is the trajectory prompt sampled from an expert trajectory to provide task-identifying information. $Z$ is injected into the model as a condition during both training and testing.
>8. ...speed of planning and sampling ...? How does this compare to related methods?
It is known that sampling from a diffusion model via the diffusion process can be quite slow, which we have discussed in Appendix F. For a concrete example in Meta-World, it takes on average 1.9s of wall-clock time to generate one action sequence for planning (the hardware being a 3090 GPU). We can improve the inference speed by leveraging a recent sampler called DPM-Solver [4] to decrease the required diffusion steps to 0.2$\times$ without any loss in performance, and by using a larger batch size (leveraging the parallel computing power of GPUs) to evaluate multiple environments at once. The evaluation run-time then roughly matches that of non-diffusion algorithms (for which the diffusion step count is effectively 1). We will add a section discussing the sampling speed of our method to the Appendix in the next version.
>9. Discussion of limitations ... moved to main paper.
We will move it into the Conclusion section in the next version.
[1] Rethinking Goal-Conditioned Supervised Learning and Its Connection to Offline RL. ICLR 2022
[2] Synthetic Experience Replay. ICLRW 2023
[3] Offline Q-learning on Diverse Multi-Task Data Both Scales And Generalizes. ICLR 2023
[4] DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps. NeurIPS 2022
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thank you to the authors for the clarifying response and additional experiments. I have raised my score accordingly.
---
Reply to Comment 1.1.1:
Title: Thank You!
Comment: We sincerely thank you for your recognition and for increasing your score! We really appreciate it!
The constructive suggestions you gave during the rebuttal session are greatly helpful in improving the quality of our paper. Thanks to your time and hard work! | Summary: - The paper investigates the effectiveness of learning a diffusion model for modelling multi-task offline data. To do so, the paper introduces two variants of using a learned diffusion model: (a) by planning over a sequence of actions, (b) generating data, and using the generated data for offline policy optimisation/improvement.
- The paper compares the proposed method to various different baselines, and shows decent improvements over all the baselines.
- The paper shows the ability of the proposed model to generate useful data, by augmenting low-quality datasets.
- The paper can be seen as an extension of Decision Diffuser for multi-task scenario.
Strengths: - The paper is very well written.
- The paper does a good job comparing to various strong baselines, as well as showing the usefulness of the generated synthetic data.
- The proposed method is shown to be effective planner for solving multi-task problem on Maze2D and Meta-World benchmarks.
Weaknesses: No such weakness as such.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - For Table H in appendix, it will be useful to report results for other baselines too.
- It will be useful to see how the performance changes as a result of varying the generated data. For example: MTDIFF-S synthesizes 2M transitions to expand the original 1M dataset. It will be helpful to report reports as to how the baselines perform by varying amount of data generated or augmented.
- It will also be helpful to see how the performance changes when MTDIFF-S is trained on fewer tasks (i.e., instead of 45 tasks it's trained on 5/10/15/20/30 tasks). Since the working hypothesis is that MTDIFF-S can perform implicit data sharing, it would be useful to verify this by decreasing the number of tasks (figure 6 only shows the comparison for 2 tasks).
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See Weaknesses and Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback and for a positive assessment of our work! We carefully address your concerns as follow:
>1. For Table H in appendix, it will be useful to report results for other baselines too.".
Thanks for this good suggestion! Considering the space limitation during this rebuttal period, we will report the results in the next version of our paper.
>2. It will be useful to see how the performance changes as a result of varying the generated data.
Thank you for this constructive suggestion! We re-ran the experiments on the coffee-push and disassemble tasks, expanding the original 1M dataset with 0.5M, 1M, and 2M generated data for each task, respectively. Then we continued to use TD3+BC as the downstream algorithm to measure the policy improvement. The results show the performance increases with more augmented data. Meanwhile, we find our method with only 0.5M generated data even outperforms RAD and S4RL, which use 2M and 10M augmented data, respectively.
| Tasks | 2M Generated | 1M Generated | 0.5M Generated | Origin |
| ----------- | --------------- | :------------: | :------------: | --------------- |
| coffee-push | $74.67\pm 6.79$ | $70.66\pm7.78$ | $65.43\pm9.91$ | $28.60\pm14.55$ |
| disassemble | $69.00\pm 4.72$ | $63.6\pm4.62$ | $60.83\pm4.40$ | $60.20\pm16.29$ |
>3. It will also be helpful to see how the performance changes as the MTDIFF-s is trained on less amount of tasks.
Thank you for this constructive suggestion! We re-train *MTDiff-s* on 30/20/10 tasks respectively, and then measure the policy improvement on the coffee-push task across 3 seeds to verify our hypothesis that multi-task training enables implicit data sharing in *MTDiff-s*.
Our findings, as outlined below, provide compelling evidence in support of our hypothesis: *MTDiff-s* exhibits progressively superior data synthesis performance with increasing task diversity.
| Number of Tasks | coffee-push |
| -------------- | --------------- |
| 45 | $74.67\pm 6.79$ |
| 30 | $68.33\pm9.00$ |
| 20 | $66.53\pm7.12$ |
| 10 | $65.06\pm 9.39$ |
| 1 | $63.41\pm8.99$ |
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thank you for running extra experiments.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Thank you for your acknowledgment of the extra experiments we conducted! We really appreciate your effort to review our paper and your recognition of our work. We are glad to see that our paper's quality has greatly improved with your valuable suggestions. If you have any additional suggestions or questions, please feel free to let us know. | Summary: This paper extends diffusion-based planners to multi-task settings by combining prompt learning. Specifically, a few segments from expert demonstrations are used as task prompts to distinguish different tasks and guide the diffusion model to generate task-specific trajectories. A classifier-free guidance approach is used to sample trajectories that yield high expected returns. Unlike previous diffusion-based planners, the authors utilize a Transformer as the denoising network. The experiments conducted demonstrate a marked improvement in performance.
Strengths: This paper is easy to follow. I feel quite enjoyable while reading this paper.
Weaknesses: - The experiment section needs further revision:
- Figure 5 x-label is not correct.
- What are the quantitative results on Maze2D unseen map
- Figure 6, what is the criterion for selecting these 2 tasks? This experiment compares the model training from scratch and the model with pre-training. The results are not that surprising since pre-training provides a better initialization in general.
- line 334: The subtitle is confusing. This paragraph is mainly about figure 6, not comparing other augmentation methods.
- In Table 2, can the authors add results with 2M random data expanded for S4RL and RAD? Just want to remove the effect of different data sizes.
- In Table 2, what are the original results? Is it the maximal return of the dataset?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough and detailed review and welcome suggestions for improvement. Here, we address your concerns as follows:
>1. The experiment section needs further revision: Figure 5 x-label is not correct.
We apologize for the incorrect x-label of Fig. 5. This figure quantifies the average scores obtained in the 8 training maps by MTDiff-p and PromptDT, respectively. The corrected version of Figure 5 is shown as Fig. 1 in the global PDF (with standard deviation added).
>2. What are the quantitative results on Maze2D unseen map?
Thank you for this question! We have added another unseen map for thorough comparison, which is rendered in Fig. 4 within the global PDF. Subsequently, we evaluate *MTDiff-p* and PromptDT on the 2 unseen maps for 100 episodes (150 steps for an episode), and report average scores as follows:
| Method | Scores |
| :--------: | :-----: |
| *MTDiff-p* | $23.37$ |
| PromptDT | $17.03$ |
>3. Figure 6, what is the criterion for selecting these 2 tasks? This experiment compares the model training from scratch and the model with pre-training. The results are not that surprising since pre-training provides a better initialization in general.
We would like to provide further clarification regarding Figure 6. It is essential to note that the figure does not illustrate the model acquired through pre-training; rather, it serves as a comparative representation of policies learned with distinct synthetic datasets generated by single-task and multi-task models.
Concretely, we train *MTDiff-s* on the multi-task dataset encompassing 45 tasks and *MTDiff-s-single* on the single-task dataset. The two models, *MTDiff-s* and *MTDiff-s-single*, are then leveraged to generate task-specific data (i.e., for *coffee-push* and *disassemble* in Fig. 6) for data augmentation in offline RL. We compare the performance of policies trained on these two kinds of augmented data and find that *MTDiff-s* shows a significant advantage over *MTDiff-s-single*. The reason would be that *MTDiff-s* can conduct implicit knowledge sharing [1] among tasks and transfer knowledge from other tasks to expand the data coverage of the target task, a point also appreciated by Reviewers 8zHR and UScg. As a result, both methods in Fig. 6 are trained from scratch with augmented data, without pre-training.
The reason why we select these 2 tasks is that they are difficult, and the policies learned from their original datasets perform relatively poorly. They are therefore suitable for evaluating policy improvement after data augmentation to validate our hypothesis. We also added new experiments on another task, *dial-turn*, and observed similar results, which are shown in Fig. 2 in the global PDF.
>4. The subtitle is confusing.
This paragraph highlights the superior performance of *MTDiff-s* in comparison to the baseline methods and offers a comprehensive analysis of the advantageous aspects derived from its multi-task training paradigm. To enhance clarity, we propose revising the subtitle to read: "How does *MTDiff-s* perform and benefit from multi-task training?"
>5. In Table 2, can the authors add results with 2M random data expanded for S4RL and RAD?
Thanks for the question. We would like to clarify that both S4RL and RAD already use additional augmented data in their training process, so there is no need to add more samples to the dataset. Specifically, during each training step of S4RL or RAD, a batch of transitions is sampled and the algorithm augments these transitions to generate new ones. As a result, the number of newly added training samples is batch_size $\times$ training_steps (e.g., 400 $\times$ 1M), which is much larger than the 2M augmented data in *MTDiff-s*.
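As a quick sanity check on the sample counts above (illustrative numbers only, matching the 400 $\times$ 1M example):

```python
# S4RL/RAD augment every sampled batch, so fresh augmented
# transitions accumulate throughout training
batch_size = 400
training_steps = 1_000_000
s4rl_rad_augmented = batch_size * training_steps  # 4 * 10**8 transitions

# MTDiff-s instead adds a fixed set of synthetic samples
mtdiff_augmented = 2_000_000

ratio = s4rl_rad_augmented / mtdiff_augmented
print(ratio)  # 200.0
```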
>6. In Table 2, what are the original results? Is it the maximal return of the dataset?
The original result is the success rate attained by the TD3-BC agent after training on the original 1 million (1M) dataset.
[1] Tianhe Yu, et al. Conservative data sharing for multi-task offline reinforcement learning. NeurIPS 2021.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank the authors for the clarification, especially in Figure 6. I'm happy to increase my score from 4 to 6.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: We sincerely thank you for engaging with us and for increasing your score! We really appreciate your effort to review our paper and your recognition of our work!
The constructive suggestions you gave during the rebuttal session are greatly helpful in improving the quality of our paper. Thanks again for your time and hard work! | Rebuttal 1:
Rebuttal: ### General Response
We thank all of the reviewers for their time and insightful comments. Furthermore, we are very glad to find that reviewers generally recognized our key contributions and clear presentation of our paper:
#### Contributions:
* **Method:** "This paper is the first to achieve both effective planning and data synthesis for multi-task RL via diffusion model and GPT" [gBBv]. The proposed method is shown to be effective and positive, which extends previous single-task models to solving more complex multi-task problems [UScg, nEaR]. "Leveraging the power of prompting instead of one-hot task encoding is a cool and elegant idea" [8zHR, nEaR]. "The analysis of how MTDIFF benefits from multi-tasking in data synthesis is interesting" [8zHR].
* **Experiment:** The paper does a good job comparing various strong baselines, and shows decent improvements over all the baselines [UScg]. The experiments conducted demonstrate a marked improvement in performance [EBNs]. The paper shows a thorough set of baselines and the approach outperforms prior SOTA baselines [nEaR].
* **Presentation:** The paper is well written [UScg, gBBv]. The paper shows clear and well-written presentation of the method [nEaR]. The paper is easy to follow [EBNs, gBBv].
Meanwhile, we thank all the reviewers for their helpful and constructive feedback to improve the quality of our work. In addition to the pointwise responses below, we would carefully update our paper in the next version to incorporate the valuable suggestions of the reviewers:
* [**UScg**] We would add the experimental results about varying the generated data of *MTDiff-s* and varying the trained tasks of *MTDiff-s* (see the response to Reviewer UScg).
* [**UScg**] We would report performance for baselines on each single task in MT50-v2.
* [**nEaR**] We would add one more unseen Maze2D map where *MTDiff-p* succeeds while PromptDT fails (see Fig. 4 in global PDF).
* [**nEaR**] We would update Figure 6 to compare with more baselines (see Fig. 2 in global PDF).
* [**nEaR**] We would polish some of our expressions (e.g. definition) and add a section to discuss the sampling speed of our model (see the response to Reviewer nEaR).
* [**nEaR**] We would add experimental results about data quality analysis and IQL performance improvement (see Table 1 and Table 2 in the global PDF).
* [**EBNs**] We would revise the subtitle at line 334 (see the response to Reviewer EBNs).
* [**EBNs**] We would add quantitative results on Maze2D unseen map (see the response to Reviewer EBNs).
* [**8zHR, EBNs, nEaR**] We would update Figure 5 (see Fig. 1 in global PDF).
* [**gBBv**] We would add a section in the appendix to demonstrate the difference between our model and PromptDT, and add the experimental results on MuJoCo benchmark (see the response to Reviewer gBBv).
We hope to have addressed all the raised concerns and would be happy to respond to further questions and suggestions.
Pdf: /pdf/2d160fdd57d34f0c35416118af87642654218022.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors propose a diffusion model, MTDIFF, for multi-task RL. The goal is to leverage the diffusion process and a Transformer backbone to obtain a SOTA generative model for RL. The authors demonstrate its effectiveness with generative planning on Meta-World and with data augmentation.
Strengths: Combining Transformers and diffusion is a straightforward idea, but the paper executed it very well. Leveraging the power of prompting in generative planning is also a cool idea. The analysis of how MTDIFF benefits from multi-tasking in data synthesis is interesting.
Weaknesses: The paper also seems unpolished. In Table 1, the MTIQL row is empty. Also, Figure 5 should be "We evaluate these two methods on both seen maps and an unseen map. The average scores obtained in 8 training maps are referred to Figure 5", but I don't know which is PromptDT and which is yours.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: With generative planning, can the authors show a visualization of how the model gets from random noise to the optimal trajectory? Like a GIF would be nice.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable comments and a positive assessment of our work! We are glad that they found that our paper provides an interesting analysis and executes a straightforward idea very well. To address your concerns, we would polish our paper in the next version to make it clear, and our detailed response follows:
>1. On table 1, row MTIQL is empty.
In Table 1, the row of **MTCQL** remains empty because MTCQL almost fails on the Meta-World MT50-v2 benchmark. The corresponding analysis is in lines 307-309 of our paper. We hypothesize that the failure of MTCQL stems from explicitly penalizing the O.O.D. actions across diverse tasks, thereby exacerbating the challenge of distribution shift in multi-task training. In our empirical experiments, we observed that MTCQL first achieves around a 10% success rate at the early training stage, which then drops to an almost 0% success rate.
>2. Also the figure 5 should be "We evaluate these two methods on both seen maps and an unseen map. The average scores obtained in 8 training maps are referred to Figure 5", but I don't know which is PromptDT and which is yours.
We are sorry for the incorrect x-label of Fig. 5. This figure quantifies the average scores obtained in the 8 training maps by MTDiff-p and PromptDT, respectively. The corrected version of Figure 5 is shown as Fig. 1 in the global PDF (with standard deviation added).
>3. With generative planning, can the authors show a visualization of how the model gets from random noise to the optimal trajectory? Like a GIF would be nice.
Thank you for this good suggestion! We present a visual depiction of the model's performance improvement as the number of denoised steps increases on Maze2D, and the result can be seen in Fig. 3 in the global PDF.
We appreciate the reviewer’s feedback again, which helped us improve the quality of the paper. We also hope that our response has sufficiently addressed the reviewer’s concerns.
---
Rebuttal Comment 1.1:
Title: Thanks for rebuttal
Comment: I've read the rebuttal and I maintain my score.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: We sincerely thank you for your recognition of our work! We really appreciate your effort to review our paper and your valuable comments! Thanks a lot.
---
Rebuttal 2:
Comment: Dear Reviewer,
The author has posted their rebuttal, but you have not yet posted your response. Please post your thoughts after reading the rebuttal and other reviews as soon as possible. All reviewers are requested to post this after-rebuttal-response. | null | null | null | null | null | null |
Sequential Memory with Temporal Predictive Coding | Accept (poster) | Summary: This paper proposes to use Predictive Coding Networks for temporal association of sequences.
Strengths: The paper is well-structured and easy to follow. The motivation is clear: a deep network model with biologically plausible learning algorithms for sequence learning.
Weaknesses: 1. This paper lacks novelty and is in fact a trivial extension of [1]. In [1], single-layer and two-layer predictive coding networks are proposed to associate the input and the output. In this paper, the input is replaced with frame x[t] in a sequence and the output is replaced with frame x[t+1].
2. Property 1 in Section 4 is a trivial result for linear regression. It should be noted that Property 1 is not rigorously presented. A condition that the data covariance matrix must be full-ranked should be imposed.
3. Lack of robust retrieval evaluation. In [1] and classic Hopfield networks, the model can recover the stored memories given a noisy initial state. Is the model in this paper robust to noise for sequence storage? Further experimental evaluation is needed.
4. Missing references [2,3]. [2] is the very first work for temporal sequence association. How do the authors compare their work to [3], which is also about predictive coding for sequences?
[1] Associative Memories via Predictive Coding. Tommaso Salvatori, Yuhang Song, Yujian Hong, Simon Frieder, Lei Sha, Zhenghua Xu, Rafal Bogacz, Thomas Lukasiewicz. arXiv, 2021.
[2] Learning Patterns and Pattern Sequences by Self-Organizing Nets of Threshold Elements. S.-I. Amari. IEEE Transactions on Computers, 1972.
[3] Deep Predictive Coding Networks for Video Prediction and Unsupervised Learning. William Lotter, Gabriel Kreiman, David Cox. ICLR, 2017.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: See above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 1 poor
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive comments on our paper. Specific responses are provided below and we kindly request that the reviewer consider reevaluating their score in light of our responses:
***
> “This paper lacks of novelty and is in fact a trivial extension of [1]. In [1], the single-layer and two-layer predictive coding networks are proposed to associate the input and the output. In this paper, the input is replaced with frame x[t] in a sequence and the output is replaced with frame x[t+1]”
We agree with the reviewer that this work extends [1] and this is also what we claimed in Related Works. However, the extension is not trivial as [1] only addresses static memories whereas our model performs sequential memory, arguably a more important type of memory, a point we made at the beginning of the paper. Notably, the seminal introduction of Asymmetric Hopfield Networks by Sompolinsky and Kanter (1986) can be viewed as an analogous extension of the original Hopfield Network towards sequential memory. Such advancements, aiming to address broader biological phenomena, should not be dismissed as trivial. Therefore, we wouldn’t consider our work as trivially extending [1].
The models in [1] did not directly associate input and output; they learned configurations representing inputs and recalled them when given a corrupted cue. Moreover, they didn't propose single-layer models. Our work differs by introducing temporal predictions for sequential inputs, unlike [1], where predictions were limited to interactions between layers. Thus, our model's innovation extends beyond replacing [1]'s input and output.
***
> “Property 1 in Section 4 is a trivial result for linear regression. It should be noted that Property 1 is not rigorously presented. A condition that the data covariance matrix must be full-ranked should be imposed.”
We thank the reviewer for pointing out that a full-rank covariance matrix condition is needed for Property 1 to hold. We will add it to the paper.
However, we respectfully disagree with the reviewer that Property 1 is a trivial result from linear regression. While it does stem from regressing x[t+1] onto x[t] within our model, its significance is far from trivial for the following reasons:
1. Property 1 originates from a biologically plausible neural circuit illustrated in Figure 1 of our original paper. This depiction elucidates how the seemingly straightforward linear regression can be executed in a biologically plausible manner within the hippocampus, with local computations and Hebbian plasticity. Consequently, the result holds substantial importance from a biological context.
2. The expression of Eq 15 in Property 1 is in a form akin to Universal Hopfield Network (Millidge et al., 2022), which establishes a vital connection between two influential computational models: predictive coding and Hopfield Network. This linkage between independently proposed models offers a significant contribution to the realm of computational modeling of neural systems.
3. Property 1 offers an insightful interpretation of linear regression coefficients. By decomposing $(X^TX)^{-1}$, it elucidates how linear regression employs a whitened similarity function to compare the input with the training data, using this similarity score to weight the target variables and compute a weighted sum of all targets. This interpretation makes a noteworthy contribution to the theoretical comprehension of linear regression.
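To make this interpretation concrete, here is a small NumPy sketch (our own illustration, not code from the paper) verifying numerically that the ordinary least-squares prediction equals a whitened-similarity-weighted sum of the stored targets:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d_in, d_out = 20, 5, 3
X = rng.normal(size=(N, d_in))   # stored x[t] patterns as rows
Y = rng.normal(size=(N, d_out))  # corresponding x[t+1] targets

# ordinary least-squares solution W = (X^T X)^{-1} X^T Y
# (X^T X is full rank here since N > d_in with generic data)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

q = rng.normal(size=d_in)        # a recall query
pred = q @ W

# the same prediction read as a Universal-Hopfield-style memory:
# whitened similarity of q to each stored x_i, then a weighted
# sum of the stored targets y_i
scores = q @ np.linalg.inv(X.T @ X) @ X.T   # one score per stored pair
weighted = scores @ Y

assert np.allclose(pred, weighted)
```

The decomposition only holds when $X^TX$ is invertible, which is exactly the full-rank condition discussed in the responses above.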
***
> “Lack of robust retrieval evaluation. In [1] and classic Hopfield networks, the model can recover the stored memories given noisy initialize state. Is the model in this paper robust to noise for sequence storage?”
We acknowledge the reviewer's interest in noisy-query retrieval and have incorporated experimental outcomes in the attached PDF file in the global responses (Fig2). We conducted experiments similar to Fig3A in our original paper, adding Gaussian noise with varying standard deviation (std) to the recall query.

Fig2A illustrates single-layer tPC's performance when retrieving from noisy queries with std 0.1 (top row) and 1.0 (bottom row). As noise levels increase, recall quality diminishes, yet tPC's recall inference progressively refines and denoises the retrieved images. Fig2B presents the same evaluation for MCAHN, which consistently retrieves clear and sharp images. This clarity stems from MCAHN's softmax separation function (Eq 5), absent in tPC (Eq 15).

Fig2C provides a quantitative assessment of the impact of noise on recall MSE for memorized sequence lengths of 16 and 32. Generally, both tPC and MCAHN incur higher recall MSE at higher noise levels. However, while MCAHN performs better than tPC at sequence length 16, the reverse holds at length 32. This observation, combined with Fig3A in our original paper, affirms that sequence length plays the predominant role in recall MSE at these noise levels. Notably, MCAHN's recall MSE has high variance because the softmax separation function can converge to erroneous sequence entries, as explained in our original paper.
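For intuition about the softmax separation function mentioned above, the following toy sketch (our own minimal illustration with random patterns, not the models used in the experiments) retrieves the successor of a noisy query by softmax-weighting the stored consecutive-frame pairs:

```python
import numpy as np

rng = np.random.default_rng(1)
T, d = 16, 64
seq = rng.normal(size=(T + 1, d))
X, Y = seq[:-1], seq[1:]             # stored (x[t], x[t+1]) pairs

def softmax_recall(q, beta=8.0):
    # softmax separation: sharply weight each stored x[t] by its
    # similarity to the query, then blend the stored successors
    s = beta * (X @ q)
    w = np.exp(s - s.max())
    w /= w.sum()
    return w @ Y

# recall MSE for increasingly noisy versions of the same query
mses = {}
for std in (0.0, 0.1, 1.0):
    noisy_query = X[3] + rng.normal(scale=std, size=d)
    mses[std] = float(np.mean((softmax_recall(noisy_query) - Y[3]) ** 2))
```

With random (uncorrelated) patterns the softmax weights are nearly one-hot, so retrieval is essentially exact for a clean query; the interesting failure modes arise with correlated patterns, which is where the whitening in tPC matters.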
***
> “Missing references [2,3]. [2] is the very first work for temporal sequence association. How do the authors compare their work to [3], which is also about predictive coding for sequences”
We have added [2] to our literature review, in global responses. The “predictive coding” used in [3] is in a broader sense: they trained a deep network using backprop, end-to-end, whereas our model is trained using local and Hebbian learning rules derived from local prediction errors. However, we agree that it would be an interesting future direction to investigate whether tPC could achieve comparable results to [3]. We will add this to our Discussion and Related Works section.
***
> References
- Sompolinsky, Haim, and Ido Kanter. Physical review letters. 1986.
- Millidge, Beren, et al. International Conference on Machine Learning. 2022.
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal and retain my original rating. | Summary: The paper presents work on (relatively) biologically-plausible neural networks for remembering sequences of inputs, extending work on temporal predictive coding nets (a simple architecture of a layer of neurons for feature values and a layer for prediction error, with some interneurons) and asymmetric modern hopfield networks.
Analysis shows a direct link between temporal Predictive Coding networks and Asymmetric Hopfield Networks, with improved performance vs AHN for correlated patterns reflecting the implicit whitening process built into the tPC net.
The multilayer tPC net develops interesting representations of items and context while solving the problem of storing sequences with repeated items.
Strengths: There are nice clear theoretical results to explain the links between tPC and AHN, and new results showing good performance in challenging sequential memory tasks with complex repeated images.
The development of interesting higher order representations of sequential order in this simple-to-analyse system will be of interest to neuroscientists given the development of these representations in the mammalian brain.
Weaknesses: The sequential memory solutions considered here use changes to connection weights to store the sequence, these might be compared with deep networks that are able to reproduce a sequence having been pretrained on similar sequences, but not the one in question (the first example I think is Botvinick & Plaut, Psych Rev, 2006; now transformers).
It seems that the only advantage in performance, compared to the AHN, is the whitening process (which could easily be added to AHN), but perhaps this really reflects the fact that this is a biologically plausible implementation of AHN (which introduces whitening as a by product) - perhaps it could be compared to alternative ways of implementing AHN in a biologically plausible way (if they exist)?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Would an alternative be to explicitly make predictions or learn associations between items that are more than one place apart in the list?
How does this model compare to those explicitly involving a sequential contextual signal (e.g. the "temporal context model")?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See above for potential limitations/comparisons that could be discussed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive comments on the additional references and possible extensions of our model. Specific responses are given below:
***
> “The sequential memory solutions considered here use changes to connection weights to store the sequence, these might be compared with deep networks that are able to reproduce a sequence having been pretrained on similar sequences, but not the one in question (the first example I think is Botvinick & Plaut, Psych Rev, 2006; now transformers)”
We examined the generalization capability of our 2-layer tPC in an experiment shown in Fig1 of the attached PDF file in global responses. Details of these experiments and results can be found in global response, “Description of Figure 1”.
However, we did not compare with Botvinick and Plaut (2006) directly as their sequential memory task is different from ours: their model was trained to memorize **and** recall sequences after a given recall cue. The testing phase is performed on unseen data, to examine whether the model can memorize and recall the new unseen sequences. On the other hand, our model is trained only to memorize, and recall is triggered by presenting to the model the first item in one of the training sequences. This difference in task prevents us from comparing with their model directly.
We value the reviewer's insight regarding Botvinick and Plaut (2006), which prompted us to incorporate a comparison with experimental data. The results are shown in Fig3 of the attached file to global responses and descriptions of the results can be found in the global response, “Description of Figure 3”.
***
> “perhaps it could be compared to alternative ways of implementing AHN in a biologically plausible way (if they exist)”
Currently, we are unaware of any other biologically plausible implementation of AHN that seamlessly incorporates the whitening operation. A possible approach is to combine the implementation in Chaudhry et al. (2023) with circuits performing statistical whitening such as in Duong et al. (2023). However, this requires further thinking and experiments and we defer it for future explorations.
***
> “Would an alternative be to explicitly make predictions or learn associations between items that are more than one place apart in the list? “
We agree this is an interesting direction to follow, as it may lead to a richer representation of contexts and time in the latent layers. However, this is beyond the scope of the current paper where we aim to explore the computational principle of the single-layer tPC and present preliminary results of the 2-layer tPC. We aim to explore this question in future works.
***
> “How does this model compare to those explicitly involving a sequential contextual signal (e.g. the "temporal context model")?”
Similar to the previous point, we aim to explore the contextual signal in the 2-layer tPC in future works. However, we appreciate the additional reference on Temporal Context Model and have added it to our literature review in the global responses.
***
> References:
- Duong, Lyndon R., et al. "Statistical whitening of neural populations with gain-modulating interneurons." arXiv preprint arXiv:2301.11955 (2023).
- Chaudhry, Hamza Tahir, et al. "Long Sequence Hopfield Memory." arXiv preprint arXiv:2306.04532 (2023).
- Matthew M Botvinick and David C Plaut. Short-term memory for serial order: a recurrent neural network model. Psychological review, 113(2):201, 2006.
---
Rebuttal Comment 1.1:
Comment: Thank you for the specific response to my comments, and the interesting global response. I think the new simulations strengthen the paper and reinforce my choice of a score of 7. I will increase my confidence in this score. | Summary: The authors propose a temporal predictive coding model that can memorize and recall sequences. The model performs better than a model based on asymmetric Hopfield networks. The authors provide a theoretical evaluation end explain the reasons for better performance. This work is inspired by neuroscience results and the authors argue that it establishes a possible computational mechanism underlying sequential memory in the brain.
Strengths: This paper proposes a new model for learning sequences using temporal predictive coding. The method is well explained, and results consist of several experiments showing better performance than when using an asymmetric Hopfield network. The authors provide a connection between temporal predictive coding and asymmetric Hopfield network - they identified how temporal predictive coding actually performs the same operation as asymmetric Hopfield network but with an implicit statistical whitening step during memory recall. They showed that when using multi-layer temporal predictive coding, the model develops latent representations of contextual information in sequential memories.
Weaknesses: The authors mentioned that this work helps establish a possible computational mechanism underlying sequential memory in the brain. In its current form, the paper lacks direct comparison with neural or behavioral data. Behavioral tasks such as free recall could be used to evaluate if the properties of the sequential memory resemble those in the brain.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: While the authors mentioned some of the related work, here are a few more papers that build neural models of memory for sequences.
Graves et al. 2014 Neural Turing machines
Voelker et al. 2019 Legendre memory units: Continuous-time representation in recurrent neural networks
Eliasmith et al. 2013 A large-scale model of the functioning brain
Howard et al. 2014 A unified mathematical framework for coding time, space, and sequences in the hippocampal region
Whittington et al. 2020 The Tolman-Eichenbaum machine: unifying space and relational memory through generalization in the hippocampal formation
The authors could comment on similarities/differences with respect to these approaches or perform a comparison.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors provided a sentence about future directions, but not about the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive comments on connecting our models to behavioral data and additional references. Specific responses are provided below and we kindly request that the reviewer consider reevaluating their score in light of our responses:
***
> “In its current form, the paper lacks direct comparison with neural or behavioral data.”
We have added a comparison to behavioral data and the results can be seen in Fig3 of the attached PDF file in the global responses. In Fig3A, our 2-layer tPC is compared with Crannell and Parrish's (1957) study on the impact of sequence length on serial recall of English letters/words. Using one-hot vectors to represent letters/words (a minimal example of a sequence of 3 letters/words: [0,1,0], [1,0,0], [0,0,1]), we report accuracy as the proportion of perfectly recalled sequences across varying lengths. Our model aligns consistently with the experimental data in Crannell and Parrish (1957) as well as with the model by Botvinick and Plaut (2006), displaying a sigmoidal drop in accuracy with increasing sequence length.
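As a concrete sketch of the one-hot encoding and the all-or-nothing accuracy measure described above (a hypothetical toy example; the alphabet and the example sequences are made up, and this is not the model itself):

```python
import numpy as np

alphabet = "abcdefghij"

def one_hot(ch):
    # each letter becomes a one-hot frame over the alphabet
    v = np.zeros(len(alphabet))
    v[alphabet.index(ch)] = 1.0
    return v

# encode a 3-letter sequence as one-hot frames, e.g. "cab"
target = [one_hot(c) for c in "cab"]

def perfectly_recalled(recalled, target):
    # a trial counts as correct only if every frame matches exactly
    return len(recalled) == len(target) and \
        all(np.array_equal(r, t) for r, t in zip(recalled, target))

# accuracy = fraction of trials with perfect whole-sequence recall
trials = [target, target[::-1]]      # one correct recall, one reversed
accuracy = sum(perfectly_recalled(t, target) for t in trials) / len(trials)
print(accuracy)  # 0.5
```

Because a single wrong frame fails the whole sequence, per-frame errors compound with length, which is one intuition for the sigmoidal accuracy drop.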
Fig3B introduces a qualitative comparison to Henson's (1998) experimental data, examining primacy/recency effects in serial recall. These effects involve higher accuracy in recalling early (primacy) and late (recency) entries in a sequence, with the recency effect slightly weaker than the primacy effect. Using one-hot vectors and fixed sequence length, we visualize recall frequency at different positions across simulated sequences (100 repetitions, multiple seeds for error bars). Each bar in Fig3B indicates the frequency of an entry at a particular position being recalled at each position. Our 2-layer tPC reproduces primacy/recency effects, albeit weaker than Henson (1998) and previous models (Botvinick and Plaut, 2006). Additionally, the model tends to recall neighboring entries upon errors, echoing Henson's data. We attribute the weaker effects to tPC's memory storage in weights, leading to overall improved performance across positions.
***
> “While the authors mentioned some of the related work, here are a few more papers that build neural models of memory for sequences.”
We have added a paragraph to the global responses, where we reviewed the papers that the reviewer pointed out. This paragraph will be added to the Related Works section in the camera-ready version of our paper.
***
> "The authors provided a sentence about future directions, but not about the limitations."
This work is limited to modeling sequential memories, whereas the model can possibly be extended to address other functionalities of the hippocampus, such as generalization, based on our initial experiments. We aim to explore this direction further in future works.
***
> References
- Richard N. A. Henson. Short-term memory for serial order: The start-end model. Cognitive Psychology, 36(2):73-137, 1998.
- C. W. Crannell and J. M. Parrish. A comparison of immediate memory span for digits, letters, and words. The Journal of Psychology, 44(2):319-327, 1957.
- Matthew M. Botvinick and David C. Plaut. Short-term memory for serial order: a recurrent neural network model. Psychological Review, 113(2):201, 2006.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. I believe the additional experiments improved the manuscript and I adjusted my score accordingly. | Summary: The paper generalizes predictive coding as a method of training neural networks to Hopfield networks, giving a model of temporal predictive coding (tPC). tPC proves itself able to memorize discrete sequences at a level competitive with Asymmetric Hopfield Networks in experiments, and provides an intriguing hint as to the potential function of statistical whitening in the hippocampus.
Strengths: The authors provide a strong mathematical grounding and linkage to the predictive coding literature. Temporal predictive coding shows the interesting strength that as the correlation between features in a sequence increases, it does not appear to significantly lose much of its capacity -- no doubt due to the implicit statistical whitening. They perform an experimental evaluation against sequential versions of the Modern Hopfield Network and the Modern Continuous Hopfield Network, though not against non-Hopfield sequence learning or cognitive mapping models.
Weaknesses: The authors overclaim about biological/neural "memories" from the first sentence of the abstract. This becomes important because lossless sequence memorization is not what the hippocampus does, and if it did, it would be useless. This leaves the major unaddressed question being: can the paper's tPC model generalize to unseen but reasonably similar sequences? What features of the sequences can change without disrupting memorization?
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Can the authors address any of the literature in the learning of cognitive maps in the hippocampus? Those often provide a connection to discrete event sequences and would give the authors a baseline to compare to beyond just the Hopfield networks they use.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Are the authors aiming their model and their claims at the computational or algorithmic Marr levels of analysis? Even if they're aiming at the algorithmic level, predictive coding has been proposed to approximate backpropagation in certain limits, and so they could have compared against backprop-trained memory models in which the backprop training was replaced with a sufficient predictive coding architecture. Likewise, there are plenty of cognitive mapping models at the computational model which are not analytically tractable, but which do admit sampling/Monte Carlo implementations that can attain neural plausibility. Why the restricted class of comparisons? Are the authors specifically proposing that the brain performs all necessary computations in closed form?
EDIT: The authors have fully addressed my concern about closed-form computation, which then places their tPC model within the class of things to which I requested comparison. This is very good!
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive comments on generalization and connection of our model to cognitive maps. Specific responses are provided below and we kindly request that the reviewer consider reevaluating their score in light of our responses:
***
> “The authors overclaim about biological/neural "memories" from the first sentence of the abstract”
We thank the reviewer for pointing this out. Indeed, the survival of biological agents requires much more beyond sequential memory e.g., generalization. We will change our narrative on the significance of memory in the camera-ready version.
***
> “can the paper's tPC model generalize to unseen but reasonably similar sequences? What features of the sequences can change without disrupting memorization?”
We have added experiments to examine the generalization capability of our 2-layer tPC model, and the results are presented in Figure 1 of the PDF file in the global response section. In this experiment, we train the 2-layer tPC with sequences of rotating MNIST digits and vary the number of training sequences (“training size”). An example of these rotating MNIST digits can be seen in Fig 1A, row “ground truth”. The model's performance is assessed by its ability to rotate both seen MNIST digits and unseen EMNIST letters. For small training sizes (16), tPC can recall seen rotating digits but struggles with generalizing to unseen letters (Fig 1A and B). Increasing the training size to 1024 improves generalization, evident in clearer rotating sequences (Fig 1A and B). Panel C quantitatively confirms this trend: the generalization MSE on unseen EMNIST drops as MNIST training size increases, indicating the model learns the underlying dynamics. Interestingly, the recall MSEs for seen MNIST sequences also decrease due to the model extracting rotational dynamics from the larger training set, differing from the behavior observed in random MNIST sequences (Fig 3B in the original paper).
***
> “Can the authors address any of the literature in the learning of cognitive maps in the hippocampus? Those often provide a connection to discrete event sequences and would give the authors a baseline to compare to beyond just the Hopfield networks they use.”
Please see the discussion paragraph in the global response, where we discussed models of cognitive maps and many other models related to sequential memory. This paragraph will be added to our Related Works section in the camera-ready version of this paper. In summary, two properties of our model have already shown tPC's potential connections to cognitive maps: 1) generalization; 2) latent representations to disambiguate observations (Whittington et al., 2022). However, the focus of this paper is to investigate predictive coding in **sequential memory**, whereas cognitive map models concentrate on the hippocampal formation's role in flexible behavior based on abstracted knowledge. Therefore, we leave the direct comparison between tPC and cognitive map models (like TEM) to future explorations.
***
> “Are the authors aiming their model and their claims at the computational or algorithmic Marr levels of analysis? “
Algorithmic level
***
> “and so they could have compared against backprop-trained memory models in which the backprop training was replaced with a sufficient predictive coding architecture”
It is indeed an interesting direction to explore the relationship between backprop and predictive coding as learning rules in sequential memory, as most discussions on predictive coding as a plausible substitute to backprop focus on supervised learning. However, the focus of this paper is to investigate **whether predictive coding can support sequential memory in the brain and its computational principles** and we do not aim to compare with backprop. We will explore this problem in future works.
***
> “Why the restricted class of comparisons?”
We mainly compare with AHN because of the shared computational principles between tPC and AHN. This is interesting because it connects two classical computational models in neuroscience: predictive coding and Hopfield Nets. However, we do agree with the reviewer that comparing with Hopfield Nets only is restricted. Thus, we added a comparison to a recurrent model by Botvinick and Plaut (2006), shown in Fig3 of the attached PDF file in global responses. Their model has been compared with data collected from benchmark behavioral tasks and thus the comparison to it establishes a connection between tPC and behavioral data. Our result shows that tPC aligns well with the experimental data and can reproduce the effect of sequence length (Fig3A) and the primacy/recency effects (Fig3B).
***
> “Are the authors specifically proposing that the brain performs all necessary computations in closed form?”
No, we are not. In fact, the computations in tPC are not in closed form: Property 1 in our original paper only holds when the temporal prediction is linear. When it is nonlinear, the retrieval cannot be expressed in closed form and has to be obtained iteratively (see Algorithm 2 in the SM). We stated this property to establish a theoretical understanding of tPC and a connection to AHNs.
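To make the iterative (non-closed-form) retrieval concrete, a schematic predictive-coding inference step might look like the following. This is a minimal sketch under assumed notation: `W_rec`, `W_out`, and `f = tanh` are illustrative choices, not the authors' exact Algorithm 2.

```python
import numpy as np

def iterative_retrieval(x_t, z_prev, W_rec, W_out, lr=0.05, steps=200):
    """Infer the latent z_t by gradient descent on a tPC-style energy
        E(z) = ||x_t - W_out f(z)||^2 + ||z - f(W_rec z_prev)||^2
    with f = tanh. Schematic only; notation is illustrative."""
    f = np.tanh
    prior = f(W_rec @ z_prev)        # temporal prediction of the latent
    z = prior.copy()                 # initialize inference at the prediction
    for _ in range(steps):
        e_obs = x_t - W_out @ f(z)   # sensory prediction error
        e_lat = z - prior            # latent prediction error
        # descend the energy wrt z (f'(z) = 1 - tanh(z)^2)
        grad = -(W_out.T @ e_obs) * (1.0 - f(z) ** 2) + e_lat
        z = z - lr * grad
    return z
```

When the temporal prediction is linear, the fixed point of this loop admits a closed-form solution, which is the case the paper's Property 1 covers.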
***
> References:
- James CR Whittington, Timothy H Muller, Shirley Mark, Guifen Chen, Caswell Barry, Neil Burgess, and Timothy EJ Behrens. The Tolman-Eichenbaum machine: unifying space and relational memory through generalization in the hippocampal formation. Cell, 183(5):1249-1263, 2020.
- James CR Whittington, David McCaffary, Jacob JW Bakermans, and Timothy EJ Behrens. How to build a cognitive map. Nature neuroscience, 25(10):1257–1272, 2022.
- Matthew M Botvinick and David C Plaut. Short-term memory for serial order: a recurrent neural network model. Psychological review, 113(2):201, 2006.
---
Rebuttal Comment 1.1:
Title: Impressive rebuttal, raising my score
Comment: To the authors,
Thank you for addressing my concerns about this paper, including the few that were a result of my own confusion. The global rebuttal and your specific response here indicates, in my view, a significant strengthening of the paper, and I will be raising my score. | Rebuttal 1:
Rebuttal: **We performed additional experiments as requested by the reviewers and presented the results in the attached PDF file. Since experiments in Fig 1 and 3 are related to the comments from multiple reviewers, we include descriptions of them here for reference:**
> Description of Figure 1:
In this experiment, we train the 2-layer tPC with sequences of rotating MNIST digits and vary the number of training sequences (“training size”). An example of these rotating MNIST digits can be seen in Fig 1A, row “ground truth”. The model's performance is assessed by its ability to rotate both seen MNIST digits and unseen EMNIST letters. For small training sizes (16), tPC can recall seen rotating digits but struggles with generalizing to unseen letters (Fig 1A and B). Increasing the training size to 1024 improves generalization, evident in clearer rotating sequences (Fig 1A and B). Panel C quantitatively confirms this trend: the generalization MSE on unseen EMNIST drops as MNIST training size increases, indicating the model learns the underlying dynamics. Interestingly, the recall MSEs for seen MNIST sequences also decrease due to the model extracting rotational dynamics from the larger training set, differing from the behavior observed in random MNIST sequences (Fig 3B in the original paper).
> Description of Figure 3:
In Fig3A, our 2-layer tPC is compared with Crannell and Parrish's (1957) study on the impact of sequence length on serial recall of English letters/words. Using one-hot vectors to represent letters/words (a minimal example of a sequence of 3 letters/words can be: [0,1,0], [1,0,0], [0,0,1]), we measure accuracy as the proportion of perfectly recalled sequences across varying lengths. Our model aligns consistently with the experimental data as well as the model by Botvinick and Plaut (2006), displaying a sigmoidal accuracy drop with increasing sequence length.
Fig3B introduces a qualitative comparison to Henson's (1998) experimental data, examining primacy/recency effects in serial recall. These effects involve higher accuracy in recalling early (primacy) and late (recency) entries in a sequence, with the recency effect slightly weaker than the primacy effect. Using one-hot vectors and fixed sequence length, we visualize recall frequency at different positions across simulated sequences (100 repetitions, multiple seeds for error bars). Each bar in Fig3B indicates the frequency of an entry at a particular position being recalled at each position. Our 2-layer tPC reproduces primacy/recency effects, albeit weaker than Henson (1998) and previous models (Botvinick and Plaut, 2006). Additionally, the model tends to recall neighboring entries upon errors, echoing Henson's data. We attribute the weaker effects to tPC's memory storage in weights, leading to overall improved performance across positions.
***
**Since all reviewers have pointed us to additional references, we added the following paragraphs of an additional discussion/literature review.**
Beyond Hopfield Networks, many other computational models have been proposed to study the mechanism underlying sequential memory. Theoretical properties of self-organizing networks in sequential memory were discussed as early as in [1]. In theoretical neuroscience, models by Jensen et al. [2] and Mehta et al. [3] suggested that the hippocampus performs sequential memory via neuron firing chains. Other models have suggested the role of contextual representation in sequential memory [4, 5], with contextual representations successfully reproducing the recency and contiguity effects in free recall [6]. Furthermore, Howard et al. [7] proposed that sequential memory is represented in the brain via approximating the inverse Laplacian transform of the current sensory input. However, these models were still at the conceptual level, lacking neural implementations of the computations. Recurrent networks with backpropagation and large spiking neural networks also demonstrate sequential memory [8, 9]. We compare our model with [8] to validate tPC’s alignment with behavior.
Our model is also closely related to the concept of cognitive maps in the hippocampal formation [10-12], which is often discussed within the context of sequence learning to explain knowledge abstraction and generalization. In this work, we present two preliminary results related to cognitive maps, showing that our tPC model can 1) disambiguate aliased observations via latent representations and 2) generalize with simple sequential dynamics as a result of performing sequential memory [12]. However, as this work centers on memory, we leave cognitive maps for future explorations of tPC.
***
**References:**
[1] S.-I. Amari, IEEE Transactions on computers 100, 1197 (1972).
[2] O. Jensen, M. Idiart, and J. E. Lisman, Learning & Memory 3, 243 (1996).
[3] M. R. Mehta, M. C. Quirk, and M. A. Wilson, Neuron 25, 707 (2000).
[4] G. V. Wallenstein, M. E. Hasselmo, and H. Eichenbaum, Trends in Neurosciences 21, 317 (1998).
[5] W. B. Levy, in Psychology of learning and motivation (Elsevier, 1989), vol. 23, pp. 243–305.
[6] M. W. Howard and M. J. Kahana, Journal of mathematical psychology 46, 269 (2002).
[7] M. W. Howard, C. J. MacDonald, Z. Tiganj, K. H. Shankar, Q. Du, M. E. Hasselmo, and H. Eichenbaum, Journal of Neuroscience 34, 4692 (2014).
[8] M. M. Botvinick and D. C. Plaut, Psychological review 113, 201 (2006).
[9] C. Eliasmith, T. C. Stewart, X. Choo, T. Bekolay, T. DeWolf, Y. Tang, and D. Rasmussen, Science 338, 1202 (2012).
[10] J. Whittington, T. Muller, S. Mark, C. Barry, and T. Behrens, Advances in neural information processing systems 31 (2018).
[11] J. C. Whittington, T. H. Muller, S. Mark, G. Chen, C. Barry, N. Burgess, and T. E. Behrens, Cell 183, 1249 (2020).
[12] J. C. Whittington, D. McCaffary, J. J. Bakermans, and T. E. Behrens, Nature neuroscience 25, 1257 (2022).
Pdf: /pdf/37e5d71276c213915dc4479cd945cd84e3b3338f.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Learning Re-sampling Methods with Parameter Attribution for Image Super-resolution | Accept (poster) | Summary: The key idea behind the proposed Bi-Sampling Parameter Attribution (BSPA) method is to reconcile the inherent unbalanced data bias, namely that heavy-tailed content (edges and textures) is visually more important than smooth areas. A similar observation has been made extensively in the image restoration and super-resolution literature. The newly developed technique involves an inverse sampling strategy for enhancing the model's feature extraction ability on hard samples. Another technical contribution is the introduction of integrated gradients for further improvement. Limited experimental results are reported to show marginal improvements over UDN (NeurIPS'2021).
Strengths: 1. The motivation behind BSPA is clear and well explained. Although similar observation exists in the literature of SR, sampling-based approach for unbalanced data has not been considered for SR before (previous studies on long-tailed data distribution are mostly for high-level vision tasks such as image recognition).
2. The proposed integrated gradient (IG) method is based on importance ranking and only updates the class of trivial parameters. This strategy is conceptually similar to the reweighting idea in the literature but easier to implement.
3. The reported experimental results have shown decent improvement over previous benchmark methods such as UDN [27].
Weaknesses: 1. The proposed bi-sampling framework in Fig. 2 seems to be based on heuristics. When only LR observation is available, it is unclear from the figure how to adapt the second step toward the prioritization of edge and texture regions.
2. For inverse sampling, the procedure described in Eq. (4) lacks substantial justification. P(x)~1/x in Eq. (1) is difficult to follow (ref. [20] does not seem to directly suggest this formula).
3. The idea of integrated gradient is inherited from previous work FAIG [38]. What is the novel contribution here? It seems that the derivation of Eqs. (9)-(10) is a direct consequence of Eq. (4) in the original FAIG paper.
4. The reported experimental results do not support the claim of "significantly boost the performance" in the abstract. Both subjective and objective evaluation suggest that the performance of BSPA is comparable to that of other competing methods. With only one figure with small size images included, the superiority of BSPA is unconvincing.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What is the point for Fig. 4 to make? It is not easy to see the difference between uniform and inverse SR in the current comparison (including both images and error maps).
2. Fig. 3 (a) looks confusing. Not much difference between random sampling and inverse sampling, what does this figure attempt to show?
3. The idea of "better distilling the remaining trivial parameters" is counter-intuitive. This could be an issue of literary presentation. If some parameters are not important (i.e., "trivial"), why do we strive to distill them better?
4. Why do you coin the term "bi-sampling"? Does uniform sampling and inverse sampling carry equal importance? If not, you might want to consider a more appropriate title for this work. Bi-sampling is different from bilateral filtering where the domain and range can be viewed as a dual representation. I am not sure if a similar duality holds for sampling procedure.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Have the authors considered other recent competing approaches to SR such as diffusion model-based? The baseline methods used in this paper do not seem to represent the current SOTA in SR.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Response to Reviewer xmJz (denoted as R5)
*Q5-1: The proposed bi-sampling framework in Fig. 2 seems to be based on heuristics.*
A5-1: It is **unreasonable** to say the proposed framework is heuristic. We aim to propose a simple yet effective bi-sampling parameter attribution method for accurate image SR from the data re-sampling view. Similar to most existing SR works, our proposal is a **supervised** method and the LR-HR pairs are available. The bi-sampling strategy takes full advantage of the paired samples, where the inverse sampling captures more hard samples so that the model performs well on edge and texture regions. The unsupervised case is out of our scope.
*Q5-2: For inverse sampling, the procedure described in Eq. (4) lacks substantial justification. P(x)~1/x in Eq. (1) is difficult to follow (ref. [20] does not seem to directly suggest this formula).*
A5-2: The reviewer might misunderstand this. 1) Eq.(4) presents the sampling probability of each class for inverse sampling, which is inversely proportional to the number of samples in each class. 2) Eq.(1) denotes that the sampling probability is the same for each sample in uniform sampling. More generally, it refers to random sampling, which is widely used in existing SR models, e.g., ref [20]. However, due to the uneven content distribution within the image, uniform sampling causes a data imbalance problem, which motivates us to design such a bi-sampling parameter attribution method.
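The two sampling schemes can be illustrated with a short sketch (a hypothetical helper, not the paper's code): uniform sampling weights each class by its size `n_c`, so every patch is equally likely, while inverse sampling weights each class by `1/n_c`, as in Eq. (4).

```python
import numpy as np

def sampling_probs(class_counts, inverse=False):
    """Per-class sampling probability.
    Uniform: p_c proportional to n_c (every individual sample equally likely).
    Inverse: p_c proportional to 1/n_c (rare, hard classes are favored)."""
    counts = np.asarray(class_counts, dtype=float)
    w = 1.0 / counts if inverse else counts
    return w / w.sum()
```

With counts like `[100, 10, 1]` (smooth patches dominating), inverse sampling assigns most of the probability mass to the rare texture class instead of the abundant smooth class.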
*Q5-3: The idea of integrated gradient is inherited from previous work FAIG [38]. What is the novel contribution here? It seems that the derivation of Eqs. (9)-(10) is a direct consequence of Eq. (4) in the original FAIG paper.*
A5-3: The reviewer might misunderstand this. We do not claim integrated gradient is our novel contribution.
(1) The key idea behind the proposed BSPA is to reconcile the unbalanced inherent data bias, for enhancing the feature extraction ability of the model on the hard samples. Therefore, we need to find the significant parameters for different sampling ways to obtain a compact representation space.
(2) To achieve this, we first formulate the parameter importance with the Cauchy mean value theorem as Eq. (8). The integrated gradient merely serves as a tool for the attribution analysis by splitting the weight changes into a continuous path, a technique commonly used in numerical analysis and other tasks.
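As an illustration of the attribution tool described above, the path integral can be approximated with a Riemann sum along the straight path between two parameter settings (a generic integrated-gradient sketch, not a reproduction of the paper's exact Eqs. (9)-(10)):

```python
import numpy as np

def ig_attribution(theta_a, theta_b, grad_loss, n_steps=100):
    """Per-parameter attribution along the straight path from theta_a to
    theta_b, approximating the path integral with a Riemann sum:
        A_i ~ (theta_b - theta_a)_i * mean_k dL/dtheta_i(theta_a + (k/n) * delta)
    grad_loss: callable mapping theta -> dL/dtheta (same shape as theta)."""
    delta = theta_b - theta_a
    acc = np.zeros_like(theta_a, dtype=float)
    for k in range(1, n_steps + 1):
        acc += grad_loss(theta_a + (k / n_steps) * delta)
    return delta * acc / n_steps
```

A useful sanity check is the completeness property of integrated gradients: summed over all parameters, the attributions approximate the total loss change between the two parameter settings.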
*Q5-4: The reported experimental results do not support the claim of "significantly boost the performance" in the abstract. Both subjective and objective evaluation suggest that the performance of BSPA is comparable to that of other competing methods. With only one figure with small size images included, the superiority of BSPA is unconvincing.*
A5-4: The reviewer might misunderstand this. 1) In the abstract, we claim that our proposal can significantly boost the performance of **baseline methods** from the data re-sampling view. Besides, the quantitative and qualitative results also illustrate our proposal achieves superior or comparable performance against the compared methods. 2) For the subjective evaluation, we supplemented more visual results in **Figure 1 of the uploaded PDF**. It is observed that our proposal is more favorable and recovers more texture details than other compared methods. 3) For the objective evaluation, we also supplement more results on other datasets (please refer to **A1-1 of Reviewer zt8x**), which shows the effectiveness of our BSPA. We hope R5 can notice these results and analyses.
*Q5-5: What is the point for Fig. 4 to make? It is not easy to see the difference between uniform and inverse SR in the current comparison (including both images and error maps).*
A5-5: Fig.4 visualizes the SR results and their error maps with the GT of uniform sampling (uniform SR) and inverse sampling (inverse SR). It shows that inverse SR has a smaller error than the uniform SR on the texture region. Please zoom in for a better view, especially for the framed regions. Due to the limited space, we will provide more results in the new version.
*Q5-6: Fig. 3 (a) looks confusing. Not much difference between random sampling and inverse sampling, what does this figure attempt to show?*
A5-6: Fig.3 (a) illustrates that inverse sampling helps feature extraction on hard texture regions and performs better on the tail (hard) classes than uniform sampling on the training dataset. The heavy-tailed classes are visually more important than smooth areas, yet are difficult for image SR. Since the ordinate scale is large, please zoom in for a better view.
*Q5-7: The idea of "better distilling the remaining trivial parameters" is counter-intuitive. This could be an issue of literary presentation. If some parameters are not important (i.e., "trivial"), why do we strive to distill them better?*
A5-7: The reviewer might misunderstand this. In 'parameter refinement', we select a specific proportion of significant parameters according to the importance ranking and keep them unchanged, performing gradient updates only on the remaining trivial parameters for further refinement. The alternate training scheme is designed for distilling all parameters to obtain a more compact representation space. We expect the trivial parameters to also learn effective information, so we strive to distill them with the other parameters fixed.
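A minimal sketch of this refinement step (hypothetical helper illustrating the idea of freezing the significant parameters and updating only the trivial ones):

```python
import numpy as np

def masked_update(theta, grad, importance, keep_ratio=0.3, lr=0.01):
    """Freeze the top keep_ratio fraction of parameters by importance
    and apply a gradient-descent step only to the remaining (trivial) ones."""
    k = int(np.ceil(keep_ratio * theta.size))
    top = np.argsort(importance.ravel())[::-1][:k]  # most significant params
    trainable = np.ones(theta.size, dtype=bool)
    trainable[top] = False                          # significant params frozen
    new_theta = theta.ravel().copy()
    new_theta[trainable] -= lr * grad.ravel()[trainable]
    return new_theta.reshape(theta.shape)
```

In the actual method, the importance scores would come from the integrated-gradient attribution; here `importance` is just an arbitrary array for illustration.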
*Q5-8: Why do you coin the term "bi-sampling"? Does uniform sampling and inverse sampling carry equal importance? If not, you might want to consider a more appropriate title for this work. Bi-sampling is different from bilateral filtering where the domain and range can be viewed as a dual representation.*
A5-8: The bi-sampling refers to the two kinds of sampling ways, i.e., uniform sampling and inverse sampling. In our proposal, uniform sampling and inverse sampling carry equal importance.
---
Rebuttal Comment 1.1:
Title: Further discussion
Comment: We hope to further discuss with you whether your concerns have been addressed or not. If you still have any unclear parts of our work, please let us know. Thanks.
---
Rebuttal Comment 1.2:
Title: Visual quality comparison
Comment: I am mostly satisfied with the authors' rebuttal. Therefore, I have increased my rating by one level. However, I was hoping that the authors could provide a more detailed comparison in terms of subjective quality. For image SR study, PSNR differences matter less than visual quality improvement, in my biased opinion. This paper only contains a small figure (Fig. 5) for subjective quality comparison. If the proposed BSPA framework works so well, it should be fairly easy to find challenging examples from the distribution tail and report significant improvement in terms of visual quality.
---
Reply to Comment 1.2.1:
Title: Thanks Reviewer ehLM for approving our response
Comment: Dear Reviewer ehLM,
Thanks for agreeing that our response solves the concerns. Generally, the visual quality is measured by the **visual results** or the **perceptual metrics** like NIQE (a non-reference metric) and LPIPS (a reference-based metric that computes the perceptual similarity between the ground truth and the output SR image).
For the visual results, we have supplemented more examples in **Figure 1 of the uploaded PDF in the front of this webpage**. It is observed that our proposal is more favorable and recovers more texture details (hard regions) than other compared methods.
For the perceptual metrics, we provide the **NIQE/LPIPS** comparisons for RCAN (4x SR) in the following table. It shows that our method obtains superior NIQE and LPIPS values against the compared methods.
|Model | Set14 | B100 | Urban100 | DIV2K_valid |
|------ |:-----:|:-----:|:-----:|--------:|
| RCAN | 6.59/0.2932 | 6.73/0.3868 | 5.83/0.2508 | 6.03/0.2836 |
| SamplingAug | 6.33/0.2876 | 6.53/0.3809 | 5.60/0.2416 | 5.83/0.2784 |
| UDN | 6.61/0.2926 | 6.69/0.3850 | 5.76/0.2475 | 5.85/0.2813 |
| BSPA | 6.26/0.2844 | 6.46/0.3750 | 5.61/0.2314 | 5.79/0.2748 |
Thanks for your suggestion. We will provide more perceptual metric comparisons on other SR backbones and visual results in the supplementary file.
---
Rebuttal 2:
Comment: Dear Reviewer ehLM,
The authors have now submitted their rebuttal, addressing the concerns and comments you raised. We would greatly appreciate it if you could take a moment to review the authors' responses and provide your feedback. Your input will be invaluable in determining the final decision for the manuscript.
AC | Summary: The uniform sampling of the data, with flat regions occupying most of the training samples, can impair the accuracy of the reconstruction. Therefore, the authors enhance the model representation from the perspective of data sampling and propose a simple and effective Bi-Sampling Parameter Attribution (BSPA) method. Extensive experiments demonstrate that our method effectively promotes the performance of baseline models.
Strengths: 1. This paper is interesting and improves the performance of existing methods from the perspective of data sampling.
2. This paper is clearly written and easy to understand.
Weaknesses: 1. The significance of the experimental results is not sufficient.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The authors do not compare their method with the latest papers, nor do they integrate the proposed method on the latest models. What is the motivation of the authors for choosing several comparative methods in the paper? For example, what is the effectiveness of this method on a transformer-based framework?
2. The authors mention that the SR data were divided into multiple groups, how was the parameter K determined here? How does this parameter affect the results?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I suggest that the authors could quantify the additional time that the proposed method brings to the training phase, so that the performance of the method can be more intuitively represented.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Response to Reviewer xmJz (denoted as R4)
*Q4-1: The significance of the experimental results is not sufficient.*
A4-1: We have already supplemented more experiments about integrating the proposed method on the latest model, the effect of parameter K, and the additional time costs. Thanks for asking for the details that could be important to further improve our method. We will include this in the final version.
*Q4-2: The authors do not compare their method with the latest papers, nor do they integrate the proposed method on the latest models. What is the motivation of the authors for choosing several comparative methods in the paper? For example, what is the effectiveness of this method on a transformer-based framework?*
A4-2: **The motivation for choosing several comparative methods**. To demonstrate the **scalability** and **generalization** of our method, we integrate it with several representative SR models of **different capacities and topologies**, i.e., FSRCNN (small), EDSR_baseline (medium), and RCAN (large), which are often used as backbones for SR models. As observed in Table 4 of the manuscript, our BSPA has strong versatility and can be used in various SR models.
**The comparison with the latest models**. Most of the existing SR models mainly focus on **delicate structure design** or **complex regularization constraints**, while few discuss the **distribution property of the training data**. For a fair comparison, we compare with imbalanced-data-based SR methods on the same backbone model: SamplingAug (data re-sampling) and UDN (loss re-weighting). If there are other related works, please tell us. We are more than happy to include them for comparison.
**The effectiveness on a transformer-based framework**. Thanks for the valuable suggestion. We further integrate our method into SwinIR [ref4-1] for $4\times$ SR and report the PSNR (dB) in the following table. Our BSPA performs better overall, so our proposal also generalizes to transformer-based models.
|model | Set5 | Set14 | B100 | Urban100 | Manga109 | DIV2K |
|------ |:-----:|:-----:|:-----:|:-----:|:-----:|--------:|
|SwinIR | 31.955 | 28.445 | 27.482 | 25.692 | 29.909 | 30.265 |
|SamplingAug | 32.054 | 28.544 | 27.515 | 25.900 | 30.177 | 30.314 |
|UDN | 32.025 | 28.544 | 27.514 | 25.873 | 30.079 | 30.332 |
|BSPA| 32.212 | 28.668 | 27.599 | 26.118 | 30.497 | 30.487 |
[ref4-1] Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, Radu Timofte: SwinIR: Image Restoration Using Swin Transformer. ICCVW 2021.
*Q4-3: The authors mention that the SR data were divided into multiple groups, how was the parameter K determined here? How does this parameter affect the results?*
A4-3: We perform an ablation study on the number of groups in the following table, comparing 5, 10, and 20. The results show that the SR performance is not sensitive to the number of classes, because the data in different classes can be sampled according to their proportion by means of inverse sampling, so as to remedy the unbalanced data bias. In our experiments, we fix the number of classes to 10.
|K | 5 | 10 | 20 |
|------ |:-----:|:-----:|--------:|
|Set14 | 28.358 | 28.37 | 28.380 |
|B100 | 27.408 | 27.40 | 27.417 |
|Urban100 | 25.531 | 25.53 | 25.544 |
|DIV2K | 30.128 | 30.13 | 30.143 |
*Q4-4: I suggest that the authors could quantify the additional time that the proposed method brings to the training phase, so that the performance of the method can be more intuitively represented.*
A4-4: The additional computation cost comes from the inverse sampling and the parameter attribution. 1) For the inverse sampling, our proposal requires extra data processing to classify all sub-images into different groups. This preprocessing takes nearly 2 hours, which can be optimized through a multi-threaded implementation in our future work. 2) The parameter attribution takes an extra half hour compared to the baseline model. Although our proposal brings extra training cost, it does not introduce any expense during the inference phase.
---
Rebuttal Comment 1.1:
Title: Further discussion
Comment: We hope to further discuss with you whether your concerns have been addressed or not. If you still have any unclear parts of our work, please let us know. Thanks.
---
Rebuttal Comment 1.2:
Title: I would raise my rating.
Comment: Thanks for the authors' response; all my concerns have been resolved.
---
Reply to Comment 1.2.1:
Comment: Thanks for agreeing that our response solves your concerns. | Summary: This work focuses on studying the unbalanced distribution of the SR training data. The authors propose a bi-sampling strategy with parameter attribution. The bi-sampling consists of uniform sampling and inverse sampling, which pay more attention to hard samples. Moreover, integrated gradient is introduced to measuring the parameter importance and stopping gradient updating of significant filters.
Strengths: + Introducing parameter attribution into SR is a novel idea. Calculating the parameter importance for obtaining a compact model sounds reasonable.
+ Extensive experiments are conducted to shows the advantages of the proposed method. The ablation study also helps to verify the effectiveness of the proposed techniques.
Weaknesses: - The motivation is not so novel. The data unbalance of SR training data has been widely mentioned in previous works. Many sampling strategies have also been proposed to solve this problem.
- According to the descriptions in ‘parameter refinement’, the most important parameters remain unchanged after a few epochs, which may cause underfitting. The authors should provide more theoretical support or conduct more experiments to verify its effectiveness.
- Some descriptions are not very clear.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - What are weight-level IG and filter-level IG? Clear definitions are missing.
- In the ablation study, the best PSNR is obtained when the unbalanced factor is equal to 10, and 200 gets the worst results. Why is the imbalance factor set to 200 in the experimental setting in line 167?
- What is the effect of different numbers of classes and intervals of alternate training?
- What is the extra computation cost?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See the above comments
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Response to Reviewer 4W2B (denoted as R3)
*Q3-1: The motivation is not so novel. The data unbalance of SR training data has been widely mentioned in previous works. Many sampling strategies have also been proposed to solve this problem.*
A3-1: **Motivation**. Most of the existing SR models only consider that different image regions have different reconstruction difficulties, and then mainly focus on delicate structure design (e.g., attention mechanisms) or complex regularization constraints. Few discuss the sampling strategy for the training data or model it as a long-tail problem.
**Idea**. The key idea behind the proposed Bi-Sampling Parameter Attribution method is to reconcile the inherent unbalanced data bias: the scarce tail data are visually more important than smooth areas. The newly developed technique combines integrated gradients with an inverse sampling strategy to enhance the model's feature-extraction ability on hard samples.
**Experiment**. For a fair comparison, we compare with imbalanced-data-based SR methods on the same backbone model: SamplingAug (data re-sampling) and UDN (loss re-weighting). If there are other related works, please tell us. We are more than happy to include them for comparison.
*Q3-2: According to the descriptions in ‘parameter refinement’, the most important parameters keep unchanged after a few epochs, which may cause underfitting. Authors should provide more theoretical support or conduct more experiments to verify its effectiveness.*
A3-2: The reviewer might have misunderstood this. In ‘parameter refinement’, we select a specific proportion of significant parameters, according to the index sorting, to keep unchanged, and only perform gradient updates on the remaining trivial parameters for further refinement. It would **not** cause underfitting, since we adopt an alternate training scheme designed to distill all parameters into a more compact representation space. We expect the trivial parameters to also learn more effective information, so we strive to distill them while the other parameters stay fixed.
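As a hedged illustration of this refinement step (an assumed sketch, not the authors' implementation; the function name and the flat parameter list are hypothetical), the significant parameters can be frozen simply by masking their gradients while the trivial ones keep receiving updates:

```python
def refine_step(params, grads, significance, keep_ratio, lr):
    """One masked SGD step: freeze the top `keep_ratio` fraction of
    parameters ranked by significance; update only the trivial rest."""
    n_frozen = int(len(params) * keep_ratio)
    # Indices sorted from most to least significant parameter.
    order = sorted(range(len(params)), key=lambda i: significance[i], reverse=True)
    frozen = set(order[:n_frozen])
    # Frozen parameters are returned unchanged; the rest take a gradient step.
    return [p if i in frozen else p - lr * g
            for i, (p, g) in enumerate(zip(params, grads))]
```

The alternate training scheme described above would then swap which subset is frozen between phases, so that all parameters are eventually distilled.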
*Q3-3: What is wight-level IG and filter-level IG? Clear definitions are missing.*
A3-3: The weight-level IG and filter-level IG denote the parameter granularity in Eq.(10). Weight-level refers to calculating the integrated gradient for each weight; filter-level refers to calculating the integrated gradient for each filter. Thanks for asking about this detail, which could be important for further improving our method. We will include it in the new version.
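For intuition, the two granularities can be sketched as follows (an assumed toy implementation, not the paper's code: `loss_grad` is a hypothetical callable returning the loss gradient at a given weight vector, standing in for backpropagation). Integrated gradients are approximated by a Riemann sum along the straight path from a zero baseline to the trained weights, and filter-level scores aggregate the weight-level ones per filter.

```python
def integrated_gradients(weights, loss_grad, steps=50):
    """Weight-level IG: w_i times the mean gradient along the path 0 -> w."""
    acc = [0.0] * len(weights)
    for s in range(1, steps + 1):
        # Point on the straight path from the zero baseline to the weights.
        point = [w * s / steps for w in weights]
        acc = [a + g for a, g in zip(acc, loss_grad(point))]
    return [w * a / steps for w, a in zip(weights, acc)]

def filter_level(ig_scores, filter_sizes):
    """Filter-level IG: sum the weight-level scores within each filter."""
    out, i = [], 0
    for n in filter_sizes:
        out.append(sum(ig_scores[i:i + n]))
        i += n
    return out
```

In practice the path integral would run over a network's loss gradient; the quadratic toy here merely shows the two aggregation levels.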
*Q3-4: In the ablation study, the best PSNR is obtained when the unbalanced factor is equal to 10, and 200 gets the worst results. Why is the imbalance factor set to 200 in the experimental setting in line 167?*
A3-4: We are sorry for the writing mistake. It should be 10 in line 167.
*Q3-5: What is the effect of different numbers of classes and intervals of alternate training?*
A3-5: We perform the ablation study on the number of classes and intervals of alternate training for EDSR_baseline ($4\times$ SR) in the following tables.
**For the classes**, we compare 5, 10, and 20. The results show that the SR performance is not sensitive to the number of classes, because the data in different classes can be sampled according to their proportion by means of inverse sampling, so as to remedy the unbalanced data bias. In our experiments, we fix the number of classes to 10.
|Classes | 5 | 10 | 20 |
|------ |:-----:|:-----:|--------:|
|Set14 | 28.358 | 28.37 | 28.380 |
|B100 | 27.408 | 27.40 | 27.417 |
|Urban100 | 25.531 | 25.53 | 25.544 |
|DIV2K | 30.128 | 30.13 | 30.143 |
**For the intervals**, we compare 10, 50 and 100. The results show that SR performance improves with smaller intervals, because a smaller interval is beneficial for obtaining a compact representation during the integrated-gradient process and lets more trivial parameters be fully exploited. For a better tradeoff between performance and efficiency, we set the interval to 50 in our experiments.
|Interval | 10 | 50 | 100 |
|------ |:-----:|:-----:|--------:|
|Set14 | 28.417 | 28.37 | 28.303 |
|B100 | 27.449 | 27.40 | 27.373 |
|Urban100 | 25.670 | 25.53 | 25.420 |
|DIV2K | 30.225 | 30.13 | 30.062 |
*Q3-6: What is the extra computation cost?*
A3-6: The additional computation cost comes from the inverse sampling and the parameter attribution. 1) For the inverse sampling, our proposal requires extra data processing to classify all sub-images into different groups. This preprocessing takes nearly 2 hours, which can be optimized through a multi-threaded implementation in our future work. 2) The parameter attribution takes an extra half hour compared to the baseline model. Although our proposal brings extra training cost, it does not introduce any expense during the inference phase.
---
Rebuttal Comment 1.1:
Title: Further discussion
Comment: We hope to further discuss with you whether your concerns have been addressed or not. If you still have any unclear parts of our work, please let us know. Thanks.
---
Rebuttal Comment 1.2:
Comment: After reading the rebuttal, some of my concerns have been solved. However, previous methods like SamplingAug [1] have also considered sampling strategies. Considering the limited novelty and experiments, this paper needs more insight and deeper experimental analysis. Thus, I tend to keep my original score of "borderline reject".
[1] SamplingAug: On the Importance of Patch Sampling Augmentation for Single Image Super-Resolution. arXiv preprint arXiv:2111.15185, 2021.
---
Reply to Comment 1.2.1:
Title: We indeed compared with Samplingaug
Comment: Dear Reviewer 4W2B,
Thanks for agreeing that our response solves some of your concerns. We would like to clarify it as follows:
(1) We **have mentioned** that SamplingAug is a sampling-based data augmentation method for SR in Line 279-230. It only samples the p% most informative patches for SR training and ignores the rest. Unlike SamplingAug, we propose a bi-sampling paradigm, consisting of uniform sampling and inverse sampling, to remedy the unbalanced data bias shown in Fig.1. All image patches are effectively utilized for SR training, accompanied by parameter attribution to help the model fully mine the image information.
(2) We **have compared with SamplingAug** in the experiments. Our method is superior to SamplingAug in both **objective metrics** (Table 4) and **visual results** (Fig.5 and Fig.1 of the uploaded pdf in this webpage).
Thanks for your response. If there are other works related to this, please tell us. We are more than happy to include them for discussion.
---
Rebuttal 2:
Comment: Dear Reviewer 4W2B,
The authors have now submitted their rebuttal, addressing the concerns and comments you raised. We would greatly appreciate it if you could take a moment to review the authors' responses and provide your feedback. Your input will be invaluable in determining the final decision for the manuscript.
AC | Summary: Observing the uneven distribution of image contents, the authors propose to utilize inverse data sampling to resolve the inherent unbalanced data bias. In the proposed BSPA method, the SR model is alternately updated with uniformly and inversely sampled image data. For the latter, only part of the trivial parameters identified by parameter attribution are updated, and the selection probability progressively increases during training.
Strengths: The data-imbalance problem this paper aims to solve has been investigated in several prior works, though from different perspectives. This work innovatively adopts the idea of resampling to address the issue, and conducts parameter attribution to balance the uniformly and inversely sampled data. The paper is well organized, and reasonable experiments are provided to demonstrate the effectiveness of the proposed BSPA method.
Weaknesses: 1. A few notations in the paper are inconsistent and confusing. In Section 3.2, the superscript for “uniform sampling” is “rs”, for example, the uniformly sampled LR-HR patch pair is denoted as $(x_{lr}^{rs}, y_{hr}^{rs})$ in Algorithm 1. However, in section 3.3, the uniformly sampled patch pair is denoted as $(x_{lr}^{us}, y_{hr}^{us})$.
2. The experimental results of ablation studies presented in Tab. 1, 2, 3 only report the PSNR on the Set14 dataset which has a relatively small scale and the data distribution is not representative enough for assessing generalizability. More comprehensive and solid ablation analysis could be conducted by evaluating the models on larger testing dataset like the DIV2K validation set, BSD100, or Urban100.
3. There is a lack of ablation analysis on the number of classes $K$ which is simply set to be 10.
4. The model’s performance on FSRCNN model for x3 and x4 scale is worse than UDN method. It should be mentioned and discussed in the experiment section.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. What is the additional computational time cost of adding inverse sampling and parameter attribution?
2. How is the scalability of the integrated gradient for parameter attribution for larger models with more parameters?
3. What is the shape of the distribution curve in Figure 1 if the MSE values are evaluated by the model trained with the proposed BSPA method? Will the distribution curve be more uniform or short-tailed?
4. It would be helpful to present more qualitative comparisons to demonstrate the visual effect.
5. How the data preprocessing is conducted (cropping with/without overlap?) and how many sub-images are produced?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Limitations of the proposed method are discussed in this paper. It would be helpful if over-fitting issue is evidenced by any experiment result and the cost of extra data preprocessing is quantitatively measured.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Response to Reviewer tDMi (denoted as R2)
*Q2-1: A few notations in the paper are inconsistent and confusing.*
A2-1: We are sorry about this. It should be $(x^{us},y^{us})$. We will revise it in the new version.
*Q2-2: The experimental results of ablation studies presented in Tab. 1, 2, 3 only report the PSNR on the Set14 dataset.*
A2-2: Please refer to **A1-1 of Reviewer zt8x (R1)**.
*Q2-3: There is a lack of ablation analysis on the number of classes which is simply set to be 10.*
A2-3: We provide the ablation analysis on the number of classes in the following table, comparing 5, 10, and 20. The results show that the SR performance is not sensitive to the number of classes, because the data in different classes can be sampled according to their proportion by means of inverse sampling, so as to remedy the unbalanced data bias. In our experiments, we fix the number of classes to 10.
|K | 5 | 10 | 20 |
|------ |:-----:|:-----:|---------:|
|Set14| 28.358 | 28.37 | 28.380 |
|B100| 27.408 | 27.40 | 27.417 |
|Urban100| 25.531 | 25.53 | 25.544 |
|DIV2K| 30.128 | 30.13 | 30.143 |
*Q2-4: The model’s performance on FSRCNN model for x3 and x4 scale is worse than UDN method.*
A2-4: Thanks for your valuable suggestion. We would like to clarify it as follows:
(1) FSRCNN is an extremely lightweight model with only 24.5K parameters and a few convolutional layers, so its representational ability is restricted. Given this limited capacity, it cannot fit either the easy or the hard data well, especially at large scaling factors. Although the bi-sampling strategy increases the data diversity, the model lacks sufficient capacity to exploit it.
(2) UDN is a loss re-weighting method for solving the data-imbalance problem. It provides a regularization constraint that shrinks the solution space.
Therefore, FSRCNN's performance at the x3 and x4 scales is worse than with the UDN method.
*Q2-5: What is the additional computational time cost of adding inverse sampling and parameter attribution?*
A2-5: The additional computation cost comes from the inverse sampling and the parameter attribution. 1) For the inverse sampling, our proposal requires extra data processing to classify all sub-images into different groups. This preprocessing takes nearly 2 hours, which can be optimized through a multi-threaded implementation in our future work. 2) The parameter attribution takes an extra half hour compared to the baseline model. Although our proposal brings extra training cost, it does not introduce any expense during the inference phase.
*Q2-6: How is the scalability of the integrated gradient for parameter attribution for larger models with more parameters?*
A2-6: To demonstrate the **scalability** and **generalization** of our method, we integrate it with several representative SR models of **different capacities and topologies**, i.e., FSRCNN (small), EDSR_baseline (medium), and RCAN (large), which are often used as backbones for SR models. As observed in Table 4 of the manuscript, our BSPA has strong versatility and can be used in various SR models.
Here, we further integrate our BSPA with the NLSN model [ref2-1], which has nearly 44M parameters and powerful feature-extraction ability. The results show that our method still improves performance via the bi-sampling strategy.
|model | Set14 | B100 | Urban100 | DIV2K|
|------ |:-----:|:-----:|:-----:|---------:|
|NLSN | 28.514 | 27.516 | 25.811 | 30.268|
|BSPA | 28.551 | 27.536 | 25.962 | 30.297|
[ref2-1] Yiqun Mei, Yuchen Fan, Yuqian Zhou: Image Super-Resolution With Non-Local Sparse Attention. CVPR 2021
*Q2-7: What is the shape of the distribution curve in Figure 1 if the MSE values are evaluated by the model trained with the proposed BSPA method?*
A2-7: We present the distribution curve of Figure 1 in the **uploaded PDF**. Note that we adopt EDSR_baseline as the backbone and integrate the proposed BSPA on it. As Figure 2 in the PDF shows, it is observed that the histogram distribution of BSPA becomes **short-tailed**. The reason is that the SR performance on the tail data is improved, while the performance on the head data is kept. Therefore, it demonstrates that our BSPA is effective in obtaining a more compact representation.
*Q2-8: It would be helpful to present more qualitative comparisons to demonstrate the visual effect.*
A2-8: We have supplemented more visual results in **Figure 1 of the uploaded PDF**. It is observed that our proposal is more favorable and recovers more texture details than other compared methods.
*Q2-9: How the data preprocessing is conducted?*
A2-9: We follow ClassSR [ref2-2] for data preprocessing, which crops each whole image into multiple sub-images using a sliding window with overlap. For 2x, 3x, and 4x SR, this produces 499875, 502200, and 499875 sub-images, respectively. More details can be found in ClassSR.
[ref2-2] Xiangtao Kong, Hengyuan Zhao, Yu Qiao, Chao Dong. ClassSR: A General Framework to Accelerate Super-Resolution Networks by Data Characteristic. CVPR 2021.
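For intuition, here is an illustrative sketch (not the authors' or ClassSR's actual script; function names are assumptions) of sliding-window cropping with overlap: a stride smaller than the patch size yields overlapping sub-images, and the last offset is clamped so the image edge is always covered.

```python
def crop_positions(length, patch, stride):
    """Top-left offsets of a sliding window along one axis; the final
    offset is clamped so the patch touching the image edge is included."""
    pos = list(range(0, length - patch + 1, stride))
    if pos[-1] != length - patch:
        pos.append(length - patch)
    return pos

def count_subimages(height, width, patch, stride):
    """Number of overlapping sub-images cropped from one image."""
    return len(crop_positions(height, patch, stride)) * \
           len(crop_positions(width, patch, stride))
```

Summing `count_subimages` over a training set like DIV2K would yield totals on the order of the sub-image counts quoted above, depending on the chosen patch size and stride.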
---
Rebuttal Comment 1.1:
Title: Further discussion
Comment: We hope to further discuss with you whether your concerns have been addressed or not. If you still have any unclear parts of our work, please let us know. Thanks.
---
Rebuttal Comment 1.2:
Comment: Thanks for the authors' response. I also read the other reviewers' reviews, and my concerns were well resolved.
---
Reply to Comment 1.2.1:
Title: Thanks Reviewer tDMi for approving our work
Comment: Thanks for agreeing that our response solves your concerns.
---
Rebuttal 2:
Comment: Dear Reviewer tDMi,
The authors have now submitted their rebuttal, addressing the concerns and comments you raised. We would greatly appreciate it if you could take a moment to review the authors' responses and provide your feedback. Your input will be invaluable in determining the final decision for the manuscript.
AC | Rebuttal 1:
Rebuttal: In this uploaded PDF, we mainly provide more visual results on benchmark datasets and the histogram distribution.
Pdf: /pdf/0f08835aec9f03b5912b6c6ae1555f4346f81717.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper addresses the problem of data imbalance in single image super-resolution (SISR) training, where the majority of training samples contain flat regions while only a small percentage represents sharp regions with rich texture details. The authors propose a Bi-Sampling Parameter Attribution (BSPA) method to enhance model representation by explicitly increasing the sampling proportion of difficult patch pairs. This is achieved through a combination of uniform sampling and inverse sampling, which preserves the original data distribution while allocating a higher probability to patches from the tail data. Additionally, the authors propose a non-trivial solution, integrated gradient (IG), to identify important parameters and encourage their contribution by preventing gradient updating. Experimental results show that the proposed method improves the performance over baseline models. The main contributions of the paper are the identification of data imbalance in SISR training, the introduction of a bi-sampling paradigm to address the imbalance, the use of IG attribution to select important parameters, and the demonstrated effectiveness of the proposed method in enhancing SISR performance.
Strengths: - Data imbalance in single image super-resolution (SISR) has long drawn attention and has been discussed in the community. This paper, for the first time, investigates and introduces a non-trivial technique for this problem, contributing to the advancement of the field.
- The proposed Bi-Sampling Parameter Attribution (BSPA) method is an original approach to enhance model representation and tackle the limitations of uniform sampling commonly used in SISR methods.
- The paper provides a thorough analysis of the data distribution problem in SISR training and supports its claims with empirical evidence.
- Extensive experiments are conducted to validate the effectiveness of the proposed method, and the results are presented in a comprehensive and organized manner.
Weaknesses: **Weaknesses**
- In Table 1-3, the authors perform evaluations on Set14, which contains only 14 images. This may not help draw robust conclusions to more general scenarios under limited data size. The authors are suggested to conduct ablation study on larger dataset (*e.g.*, BSD100, Urban100, or DIV2k) to make more convincing conclusions.
- What is the benefit of splitting data patches into multiple groups, compared to directly measuring balanced weight within the continuous range?
- In inverse sampling preprocessing, what model is used to measure MSE of each patch? As $D^{rs}$ and $D^{is}$ are predefined and fixed in Algorithm 1, would it be beneficial to re-calculate and re-balance $D^{is}$ dynamically after updating $F$ with $\theta^{us}$ in each epoch?
- The qualitative results are limited. The authors should prepare a supplementary material with more visual results.
**Additional comments**
- Make consistent use of symbols. For example, $rs$ and $us$ in algorithm 1.
- Do not use the same notations for different quantities. For example, $N$, $N_k$ and $N_i$ in Eqn 2 and algorithm 1.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are well discussed in this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Response to Reviewer zt8x (denoted as R1)
*Q1-1: In Table 1-3, the authors perform evaluations on Set14, which contains only 14 images. This may not help draw robust conclusions to more general scenarios under limited data size. The authors are suggested to conduct ablation study on larger dataset (e.g., BSD100, Urban100, or DIV2k) to make more convincing conclusions.*
A1-1: Thanks for your suggestions. In this work, we mainly perform evaluations on Set14 as the previous works [ref1-1][ref1-2] do. Besides, we supplement more results on other datasets in the following tables. It is observed that our method is robust to more general scenarios.
Table 1-1: Ablation studies about the bi-sampling parameter attribution on benchmark datasets for $4\times$ SR.
| Model | Baseline (uniform sampling) | inverse sampling | w/o IG | WIG | FIG |
|------ |:-----:|:-----:|:-----:|:-----:|---------:|
| Set14 | 28.22 | 28.25 | 28.28 | 28.32 | 28.37 |
| B100 | 27.31 | 27.33 | 27.35 | 27.38 | 27.40 |
| Urban100 | 25.27 | 25.34 | 25.38 | 25.47 | 25.53 |
| DIV2K | 29.97 | 30.02 | 30.06 | 30.09 | 30.13 |
Table 1-2: The quantitative comparisons of different scaling factor in Eq.(11) on benchmark datasets for $4\times$ SR.
| $\beta$ | 0.1 | 0.5 | 0.8 | 1.0 |
|------ |:-----:|:-----:|:-----:|---------:|
|Set14| 28.37 | 28.35 | 28.34 | 28.32|
|B100| 27.40 | 27.37 | 27.35 | 27.31|
|Urban100| 25.53| 25.51 | 25.46 | 25.43|
|DIV2K | 30.13 | 30.10 | 30.07 | 30.05|
Table 1-3: The quantitative comparisons (PSNR) of different unbalanced factors on benchmark datasets for $4\times$ SR.
|Unbalanced factor | 10 | 50 | 100 | 200 |
|------ |:-----:|:-----:|:-----:|---------:|
|Set14 | 28.37 | 28.33 | 28.28 | 28.21 |
|B100 | 27.40 | 27.35 | 27.31 | 27.23 |
|Urban100 | 25.53 | 25.49 | 25.42 | 25.30 |
|DIV2K | 30.13 | 30.06 | 30.00 | 29.91 |
[ref1-1] Yulun Zhang, Kunpeng Li, Kai Li, Lichen Wang, Bineng Zhong, Yun Fu: Image Super-Resolution Using Very Deep Residual Channel Attention Networks. ECCV 2018.
[ref1-2] Yiqun Mei, Yuchen Fan, Yuqian Zhou: Image Super-Resolution With Non-Local Sparse Attention. CVPR 2021
*Q1-2: What is the benefit of splitting data patches into multiple groups, compared to directly measuring balanced weight within the continuous range?*
A1-2: Balanced weights **cannot** be measured directly within the continuous range. Inverse sampling aims to allocate a higher probability to capturing the tail hard data, which have few samples. We adopt MSE to measure reconstruction difficulty; it is a continuous value, with each value corresponding to a single sample. Since the MSE values are continuous and widely dispersed, more samples within a certain range need to be collected to calculate the proportion. Therefore, we split the data patches into multiple groups to better calculate the sampling probability.
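To make the grouping-plus-inverse-sampling procedure described above concrete, here is a small, assumed sketch (function names are illustrative, not from the paper): patches are binned into K equal-width MSE groups, and each group's sampling weight is set inversely proportional to its size, so rare hard patches from the tail are drawn more often.

```python
import random

def group_by_mse(mse_scores, k):
    """Assign each patch index to one of k equal-width MSE bins."""
    lo, hi = min(mse_scores), max(mse_scores)
    width = (hi - lo) / k or 1.0  # guard against identical scores
    groups = [[] for _ in range(k)]
    for i, m in enumerate(mse_scores):
        g = min(int((m - lo) / width), k - 1)
        groups[g].append(i)
    return groups

def inverse_sampling_probs(groups):
    """Group weight proportional to 1 / group size, so tail groups are favored."""
    inv = [1.0 / len(g) if g else 0.0 for g in groups]
    z = sum(inv)
    return [w / z for w in inv]

def sample_patch(groups, probs, rng=random):
    """Draw a group by its inverse probability, then a patch index from it."""
    g = rng.choices(range(len(groups)), weights=probs, k=1)[0]
    return rng.choice(groups[g])
```

With 8 easy patches and 2 hard ones split into K=2 groups, the hard group receives sampling probability 0.8, matching the intuition that tail data should be oversampled.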
*Q1-3: In inverse sampling preprocessing, what model is used to measure MSE of each patch? As Drs and Dis are predefined and fixed in Algorithm 1, would it be beneficial to re-calculate and re-balance Dis dynamically after updating F withθus in each epoch?*
A1-3: (1) We adopted the **pretrained MSRResNet** to measure MSE of each patch and **mentioned** this in Line 50-57 of the manuscript.
(2) It would **not** be beneficial to re-calculate and re-balance $D^{is}$ dynamically after updating F with $\theta^{us}$ in each epoch. The inverse sampling preprocessing consists of data classification and sampling procedures.
- We adopt the pretrained SR model to measure the MSE of each patch at the beginning, which aims to obtain the recovery difficulty of each patch for better classification. Dynamically adjusting $D^{is}$ would introduce extra noise because of the poor model performance in the early training process.
- Adjustment in each epoch is time-consuming. All the cropped patches would have to be evaluated to obtain their reconstruction difficulty and then classified into different groups according to MSE, which is measured in a global view. Besides, the number of sub-images is 499875, 502200, and 499875 for $2\times$, $3\times$, and $4\times$ SR, respectively, so dynamic re-calculation and re-balancing in each epoch would bring in more time cost.
*Q1-4: The qualitative results are limited. The authors should prepare a supplementary material with more visual results.*
A1-4: We have supplemented more visual results in **Figure 1 of the uploaded PDF**. It is observed that our proposal is more favorable and recovers more texture details than other compared methods.
*Q1-5: Make consistent use of symbols. For example, rs and us in algorithm 1.*
A1-5: Thanks for your careful comments. We will revise it in the new version.
*Q1-6: Do not use the same notations for different quantities. For example, N, Nk and Ni in Eqn 2 and algorithm 1.*
A1-6: Thanks for your careful comments. We have revised $N$ and $N_i$ as $T$ and $T_i$ in the new version.
---
Rebuttal Comment 1.1:
Title: Further discussion
Comment: We hope to further discuss with you whether your concerns have been addressed or not. If you still have any unclear parts of our work, please let us know. Thanks.
---
Rebuttal Comment 1.2:
Comment: Most of the concerns have been addressed. Therefore, I would like to keep my initial rating. The authors are suggested to include those discussions and evaluations in the revised paper.
---
Reply to Comment 1.2.1:
Title: Thanks Reviewer zt8x for approving our work
Comment: Thanks for agreeing that our response solves most of the concerns. We will further improve this paper by adding these discussions and evaluations in the revised paper. | null | null | null | null | null | null |
Provable convergence guarantees for black-box variational inference | Accept (poster) | Summary: This paper offers the first convergence results for black-box VI, a widely used and popular framework for Bayesian problems. Under assumptions on the log model, $\log p$, and given a Gaussian variational family of distributions, convergence rates are established by utilizing recent advances in the field. The assumptions are motivated by giving practical examples of when they are satisfied.
Strengths: This work seems to lay the missing puzzle piece in a series of works towards establishing convergence rates for black-box VI. As such, it is clearly original, and it is significant as black-box VI is widely used in practice and is a popular research topic. Although the results build on seemingly strong assumptions on the log model, $\log p$, the significance of the results is emphasized by exemplifying broad classes of problem settings where the assumptions are satisfied, which deserves credit. I believe this paper will be of interest to the NeurIPS community.
The paper follows a clear line of argument and builds up to its results in a pedagogical manner, giving a convincing account of the existing, related works.
Weaknesses: **W1**: $C$ is a matrix (line 63), but in line 120 I read it as $C = 0$, by inspecting $\bar{w} = (\bar{m}, 0)$. It is not clear to me what this means. Are all elements in the covariance matrix zero (since $\Sigma = C C^T$)?
**W2**: In Sec. 2.1 I read it as, if $\log p$ is $M$-smooth then we can expect that the optimal covariance parameter for $q_w$ should not be too small. This makes sense to me. In fact they should be at least $1 / \sqrt{M}$ as shown in Domke's previous work. However in Theorem 2, where the log-target is assumed to be $M$-smooth, it says that $\bar{w}=(\bar{m}, 0)$ is the "maximum of $\log p$". I am confused about this. Doesn't $\bar{w}$ here imply that $\log p$ is not smooth, and that the optimal parameters $w^* = \bar{w}$, i.e. $C^* = 0 < 1 / \sqrt{M}$?
If I am helped clear up this confusion, I am willing to raise my score.
**W3:** I am not sure how the discussion about the score-type estimator in Eq. 4 contributes to the paper. The results in paper are based on the reparameterization trick (i.e. the path-type estimator in Eq. 5), no? What is the contribution of this discussion to the results in the paper?
**Minor issues**
In line 16, $h$ is called the entropy, but as it is defined in Eq. 1 and 15 it is the neg-entropy.
Sometimes (e.g. in the Central contributions box and on line 93) $q_w$ is referred to as a family of distributions and sometimes (line 243) it is used as a density. In VI the "family" is often meant as the set of distributions in which we want to find the optimal approximation, no? So can it really be a family and a distribution simultaneously? Perhaps it is clearer if $\mathcal{Q}_w$ is used to denote the family of Gaussian distributions parameterized by $w$?
Typo:
* line 35: objective is misspelled
* line 141: "that bar" seems to be a typo
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * In Sec. 2.1 it is stated that $h$ is $M$-smooth over the set $\mathcal{W}_M$. Based on this, do you make any assumptions on $q_w$ other than it being Gaussian? I was expecting some new assumption given the title of the section, but maybe this is a "structural property"?
* Out of curiosity, could you expand on the constructed problem where the variance of the path-based estimator is "vastly" increased and the score-based estimator is not? Where is the noise added? To $\phi$?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: The assumptions are clearly highlighted and discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### C is a matrix but C=0
Yes, in this context of writing something like w=(m,0), the 0 means a matrix of zeros. If φ(z)=-log p(z,x), then the gradient noise is bounded in terms of how far the parameters w=(m,C) are from representing a delta function centered at the MAP solution. (Note this is different from the optimal solution w*.)
### $\bar{w}$ is the maximum of $\log p$, doesn't this imply log p is not smooth?
It’s important to note here that $\bar{w}$ is not the optimal solution to the problem, which we denote by $w^*$. We do not say that $\bar{w}$ is the maximum of $\log p$ (which is not true) but rather that $\bar{w} = (\bar m, 0)$, where $\bar m$ is the maximum of $\log p$. Intuitively, we can see $\bar{w}$ as parameters that represent a delta function (because it has zero covariance) centered at the MAP solution. Note that this is just an intuition and we never make use of this interpretation. We propose to rewrite the Theorem to make this more clear.
### Discussion of score-type estimators
We appreciate this point. Arguably this discussion is unnecessary and distracting at the point it currently happens. Our reason for including it was that we wanted to mention that there are situations where score-function type estimators have lower variance than reparameterization, and so these arguably deserve further investigation. We believe this comment would make more sense in the discussion section.
### Any other assumptions needed for h to be M-smooth?
The assumption that h is M-smooth (over $W_M$) requires no further conditions: it is indeed a structural property. We will clarify this in the paper.
### Expand on the problem with variance of path-based is increased and score-based is not?
In terms of how the variance of the path-based estimator could be increased, imagine some problem with an unnormalized posterior φ(z) which is equal (up to a constant) to -log p(z,x). Suppose for simplicity that z is just a scalar. Now imagine changing to a new unnormalized posterior φ’(z) = φ(z) + √ε sin(z/ε) for some very small value ε. The difference of φ and φ’ is trivial (they only vary by a tiny amount). However, the derivative of the added term is cos(z/ε) / √ε. When ε is very small, this derivative will be huge. This will vastly increase the noise of a reparameterization estimator but have little effect on a score-based estimator. (In a sense, this is why the assumption that log p(z,x) is M-smooth is so important—it prohibits posteriors that do things like this!)
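The effect is easy to check numerically. Here is a small Monte Carlo sketch (the choices φ(z) = z²/2, ε = 10⁻⁴, and the variational parameters below are illustrative assumptions of mine, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 1e-4
m, c = 0.5, 1.0                         # q_w = N(m, c^2), illustrative values
u = rng.standard_normal(200_000)
z = m + c * u                           # reparameterized samples

phi = lambda z: z**2 / 2                # smooth phi (illustrative choice)
dphi = lambda z: z
wiggle = lambda z: np.sqrt(eps) * np.sin(z / eps)   # tiny added term
dwiggle = lambda z: np.cos(z / eps) / np.sqrt(eps)  # its huge derivative

# Path (reparameterization) estimator of d/dm E_q[phi]: phi'(m + c*u)
path_smooth = dphi(z)
path_wiggly = dphi(z) + dwiggle(z)

# Score-function estimator: phi(z) * d/dm log q_w(z) = phi(z) * (z - m) / c^2
score_smooth = phi(z) * (z - m) / c**2
score_wiggly = (phi(z) + wiggle(z)) * (z - m) / c**2

print(path_smooth.var(), path_wiggly.var())    # variance explodes (~1 vs ~5000)
print(score_smooth.var(), score_wiggly.var())  # nearly unchanged
```

The wiggly term changes φ by at most √ε but changes its derivative by 1/√ε, so only the path-based (reparameterization) estimator blows up, matching the argument above for why M-smoothness of log p(z,x) matters.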
### errors / typos
Thank you for pointing these out. We will correct them. We agree that the way we used the terminology of “density” vs “family” is confusing. We will fix this by saying that q_w for a specific w is a Gaussian distribution, whereas the set {q_w | w ∈ W} (or Q_w) is the family.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for the clarifications provided in the rebuttal.
Especially I appreciated the clarifications and the example regarding the score-type estimator and the variance of the path-based estimator. Now that I can more clearly see the point of the discussion, I think it makes a nice point why the assumption of an M-smooth log-density is important. I would be glad if it was somehow included in the discussion section.
I will increase my score to 7 if the issue in the proof raised by gXey is fixed. | Summary: This paper offers a convergence proof for the stochastic optimization problem inherent in full-rank Gaussian variational inference when the log-density of the target is concave. The primary challenge of the convergence proof lies in managing the non-smoothness present in the entropy term of Gaussian VI. This issue is addressed in the current work by considering proximal and projected stochastic gradient descent. To evaluate the validity of the imposed assumptions, case studies involving Bayesian (generalized) linear regression problems are provided.
Strengths: I appreciate the clarity of the writing of the present work; it clearly describes the scope of the problem studied and states the main challenge of showing convergence. The layout of each section forms a smooth flow of the proof strategies, in addition to a nice organization of the mathematical proofs, which makes the reading enjoyable.
The case study provided in the appendix (Sec 7.3) is helpful for reasoning about the imposed assumptions.
The proximal operator introduced for optimizing the covariance matrix is novel to me. Although it is a standard technique in the optimization literature---splitting off the non-strongly-smooth (but closed convex) part of the objective function using a proximal operator---this is the first time I have seen it in the Gaussian VI literature. (This could of course be due to my lack of knowledge in this field.) Additionally, the weighted telescope summation used when $-\log p$ is only convex (rather than strongly convex) is very interesting; this is a setting that is rarely considered in the literature as far as I'm aware.
Even though the paper limits the scope to dense Gaussian VI problem, I think the present techniques can be applied to general location-scale variational families (as long as the variance quadratic bounds still holds).
Weaknesses: In my view, the principal limitation of this work is the somewhat restricted scope of the Variational Inference (VI) problem it examines. Specifically, the variational family under consideration is: (1.) Gaussian (albeit full-rank), and (2.) the target distribution is log-concave or strongly-log-concave. Additionally, (3.) data subsampling on $\log p$ is not taken into account. The proof techniques necessitated by this setting, in my opinion, are fairly standard within the stochastic optimization literature. While I acknowledge that any relaxation of condition (2) would likely preclude anything beyond convergence to some stationary point, and that the quadratic variance bound is heavily reliant on condition (1) (which is probably not a major concern for the broader community in regards to more complex VI families), offering a convergence result when data subsampling is implemented could greatly increase the impact of this work. I would be inclined to raise the score to 8 if data subsampling was considered, or if existing proof techniques could easily address it (though I doubt the current approach is effective in this case, please correct me if I'm mistaken).
Regarding originality, even though I concur that explicit results may be lacking for this specific problem setting, I don't think this is the **first** optimization theory guarantee for full-rank Gaussian VI. For instance, [Xu & Campbell, 2022] studies the convergence of full-rank Gaussian VI without the assumption of a log-concave target (even though they utilize posterior asymptotics to somewhat reduce the underlying optimization problem on a strongly-log-concave target, they also employ a special scaling operator (Eq(8)) to handle the non-Lip-smoothness of the entropy term $\log \text{det}$). I recommend that the authors integrate an optimization analysis of the scaled stochastic gradient descent proposed in [Xu & Campbell, 2022], and carefully compare the results to those provided in [Xu & Campbell, 2022] within a strongly convex setting at least (ignoring the data asymptotics and merely assuming that $\log p$ is strongly-log-concave should align their work with the current setting).
Finally, I encourage the authors to provide additional case studies, perhaps involving more Bayesian GLMs or even Bayesian sparse regression with a Horseshoe prior. From what I understand, most of these models yield a log-concave target, but the application of the variance quadratic bound is unclear to me. If negative results were to appear, for example, some models not having a quadratically bounded gradient noise, these examples would still significantly benefit the community.
At the end, I hope to stress that despite the aforementioned weakness/limitations, I think this is a really good work for this niche research direction.
Reference:
[Xu&Campbell, 2022]: The computational asymptotics of Gaussian variational inference and the Laplace approximation, Statistics and Computing
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. Compare to [Xu&Campbell, 2022]: As I mentioned earlier, this is a very relevant past work on this topic. I wonder specifically: if we ignore all the asymptotic analysis and assume $\log p$ is strongly-log-concave, does [Xu&Campbell, 2022] achieve a similar convergence rate ($O(1/T)$) to the present paper?
2. I wonder if the proximal operator (line 264) has been noted in past VI literature, or is it proposed by the authors?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: See Weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your review.
### Convergence result when data subsampling
Our proofs can indeed address subsampling with minor technical difficulty, namely bounding the variance of a slightly changed gradient estimator. Note that the main optimization results only depend on (1) the structural properties of the problem (which are unchanged with subsampling) and (2) the constants $(a,b)$ bounding the gradient variance. Our gradient variance guarantees in Theorems 2 and 3 are all based on Theorem 1 from [20]. However, that reference also provides a more general version of Theorem 1 (Theorem 6 in [20]) which considers data subsampling. Very roughly speaking, with uniform data subsampling of one datum at a time, the expected squared norm of the gradient increases by a factor of between $1$ (no increase) and $n_{\mathrm{data}}$, depending on how correlated the data are. (The less correlation, the larger the increase.) So, with subsampling, the bounds in both Theorem 2 and Theorem 3 would increase by a factor of $1$ to $n_{\mathrm{data}}$. (To be more precise, the second term in Theorem 3 would be unchanged, though this would typically be dominated by the first term.) Those increases would then manifest as an increase of between $1$ and $n_{\mathrm{data}}$ for both $a$ and $b$ in the quadratic noise bounds. After that point, exactly the same convergence results hold. We will add a discussion of this.
### Compare to [Xu&Campbell]
Thank you for pointing out this paper, which we were not aware of. It does indeed provide a convergence guarantee for an algorithm similar to proximal-SGD using the g_{energy} estimator, and it was an oversight not to cite and discuss it. The principal differences with our work are that (1) their guarantee is asymptotic (holds in the limit of large T), (2) their guarantee is local (holds when started close to the solution), (3) our analysis also considers the g_{ent} estimator, and (4) our analysis considers the convex and strongly-convex cases, whereas theirs considers the non-convex case. Their analysis does not consider the strongly convex assumption (equivalently, when p is strongly-log-concave), so it only gives an O(1/√T) rate [Thm 3 in their paper] rather than O(1/T). We will discuss this paper and revise our claims appropriately.
### Prox operator
The proximal operator has indeed been used in the past in VI [13] which we should note more clearly.
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply. I believe the authors response has addressed all of my questions properly, so I'd love to raise the score to 8.
However, I found a mistake in the proof of Lemma 28, which could potentially cause some issue.
Line 670 reads:
$$\frac{1}{1-\theta^T} \leq \frac{1}{(1-\theta) T} = ...$$ Here $\theta:=\frac{1}{1+B \gamma^2} \in (0, 1)$.
Note that the LHS of the above inequality converges to $1$ as $T\to \infty$, while the RHS converges to $0$, so the inequality is incorrect. Although I think the proof is fixable -- intuitively Lemma 8 gives the sensible rate -- it's important to have the proof corrected, since Lemma 28 is a key step for the convergence results in the convex cases (Thm 7 and Thm 10), which I consider to be the key contribution of the paper.
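For a concrete numerical check (the values $\theta = 1/2$ and $T = 10$ are illustrative choices, not from the paper):
$$\frac{1}{1-\theta^{T}} = \frac{1}{1-2^{-10}} \approx 1.001, \qquad \frac{1}{(1-\theta)\,T} = \frac{1}{0.5 \cdot 10} = 0.2,$$
so the claimed inequality already fails at moderate $T$.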
Therefore, I'll keep my current score. If the proof is fixed, I'll update the score to 8, otherwise I'll reduce the score to 6.
---
Reply to Comment 1.1.1:
Title: Fixed issue
Comment: Dear Reviewer,
thank you very much for spotting this mistake in the proof. Fortunately, it is easy to fix (we provide details for you below) and this fix has no consequences on the results, besides a few multiplicative constants being modified.
The key is to prove that $1/(1-\theta^T)$ will be less than 2, provided $T \geq 2$ and that the stepsize is taken as $\gamma= A/\sqrt{T}$, for an appropriate choice of A. This is encapsulated in the revised Lemma, which we provide in a separate comment below. | Summary: The paper proves 1/sqrt(T) (respectively 1/T) convergence rates for black-box variational inference methods when implemented with a proximal stochastic gradient method. Such rates were not available in the literature until now, due to difficulties in bounding the gradient noise. The main contribution of the paper is to prove new bounds on the gradient noise and incorporate them into the proof techniques for proximal SGD.
Strengths: The paper is well-written and clearly states its contributions. While the topic seems rather narrow, the technical part of the paper appears to be a novel and non-trivial result.
Weaknesses: 1) The paper solves a quite "narrow" problem, that is, fixing some gaps in existing convergence proofs for variational inference. It is unclear whether this work will be of interest to the wider NeurIPS community, as it is a niche topic, and the theory does not seem to directly suggest any practical improvements or changes.
2) BBVI is sometimes less preferable in practice than natural-gradient algorithms. These methods use the KL / Fisher-geometry (https://arxiv.org/abs/2107.04562). This line of works could be mentioned in the introduction.
3) There have been some recent works
https://arxiv.org/abs/2205.15902
https://proceedings.mlr.press/v202/diao23a.html
which prove convergence for BBVI-like algorithms by interpreting them as Wasserstein gradient flow. Perhaps these works could be mentioned.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) Which algorithm would be preferable in practice, prox-SGD or proj-SGD? From the theory side, it seems they have similar convergence rates, but maybe there are other things to take into consideration?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: All limitations are addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review.
### Natural gradient algorithms
We agree this should be discussed. We will also mention that extending theory to address these algorithms remains an open problem.
### Related work by Diao et al.
We agree we should mention these recent works. (We note that they make use of Hessians, which can be computationally challenging and is distinct from most practical BBVI algorithms.)
### Prefer prox-SGD or proj-SGD
The reviewer is correct when saying that from a theory side, prox-SGD and proj-SGD have similar convergence rates with respect to T. If we want to compare them further, we need to look closely at the multiplicative constants in the rates, which depend essentially on the constants (a,b) in the quadratic bound. Since prox-SGD needs the “energy” estimator and proj-SGD needs the “entropy” estimator, the question could be shifted to: which estimator has the best constants (for instance the lowest variance). From that perspective, the two estimators we present have roughly the same constants.
Another consideration in practice is the complexity of the projection vs proximal steps. In many cases, these will both be very cheap but in certain high-dimensional settings, the complexity of the proximal step could be higher (Θ(d^3) vs Θ(d^2), see lines 273-281).
This being said, the story could be completely different when considering *other* estimators. Intuitively, we could imagine that an estimator will be able to have lower variance if it is a stochastic estimator of the full objective function instead of just the energy function. That scenario would give an advantage to the proj-SGD algorithm. We have just now been able to prove exactly this for the “Sticking-The-Landing (STL)” estimator. We showed that it is quadratically bounded, and further that its constant “b” is directly related to how close to a Gaussian the target is. This formalizes the idea that the easier the problem is (here approximating an almost Gaussian with a Gaussian), the lower the variance of the estimator is. In the extreme case that the target is exactly a Gaussian, this variance “b” is exactly zero, meaning that the rates drastically improve from 1/T to exponential rates. In this setting, proj-SGD would be a better choice.
We propose to add this new small result in the paper, and to develop the discussion on the cost at the end of Section 4. We will underline the fact that the variance of the chosen estimator has a strong impact on the complexity (a well-known fact in stochastic optimization) and that some estimators can have a significantly smaller variance (like STL for easy problems).
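To make the projection-vs-proximal comparison concrete, here is a minimal numpy sketch (my own illustration, not the paper's implementation) of what the two steps can look like when the variational scale is a lower-triangular matrix $C$, so that the non-smooth entropy term is $-\log|\det C| = -\sum_i \log C_{ii}$ and both operations act only on the diagonal:

```python
import numpy as np

def prox_neg_logdet(C, gamma):
    """Proximal step for gamma * (-log det C) with lower-triangular C.

    Each diagonal entry u minimizes (u - x)^2 / 2 - gamma * log(u),
    whose positive stationary point is u = (x + sqrt(x^2 + 4*gamma)) / 2.
    Off-diagonal entries are untouched.
    """
    C = C.copy()
    d = np.diag(C)
    np.fill_diagonal(C, (d + np.sqrt(d**2 + 4 * gamma)) / 2)
    return C

def project_onto_W_M(C, M):
    """Euclidean projection onto {C : C_ii >= 1/sqrt(M)}: clip the diagonal."""
    C = C.copy()
    np.fill_diagonal(C, np.maximum(np.diag(C), 1 / np.sqrt(M)))
    return C

C = np.array([[1.0, 0.0],
              [0.3, 0.1]])
P = prox_neg_logdet(C, gamma=0.5)   # diagonal grows and stays positive
Q = project_onto_W_M(C, M=4.0)      # small diagonal entries clipped to 0.5
```

This elementwise form is only one parameterization; the Θ(d³) vs Θ(d²) comparison in the rebuttal refers to the paper's exact setting (lines 273-281 there).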
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed answer!
Regarding Diao et al. necessarily requiring Hessians, this is not true. An expectation over Hessians can always be rewritten as expectation over gradients (via reparametrization trick). | Summary: The paper addresses the lack of provable convergence guarantees for black-box variational inference (VI) and proposes convergence guarantees for two stochastic optimization algorithms applied to Gaussian variational families.
The authors identify challenges in analyzing VI as a standard stochastic optimization problem, including the non-smoothness of the objective function, lack of uniform smoothness, and lack of uniformly bounded noise for gradient estimators.
They provide theoretical results and noise bounds for two gradient estimators and propose proximal and projected stochastic gradient descent algorithms.
I did not find technical flaws.
Strengths: 1. The paper addresses an important problem by providing provable convergence guarantees for black-box VI, which is widely used but lacks theoretical guarantees.
2. The authors clearly articulate the challenges in analyzing VI as a standard stochastic optimization problem, such as the non-smoothness of the objective function and the lack of uniform smoothness and noise bounds. The paper provides rigorous theoretical results and noise bounds for gradient estimators used in black-box VI.
3. The paper is well written. It provides a comprehensive and coherent review of the literature and clearly presents the technical contributions and proof ideas. I'm not a theory person, but I enjoyed reading the paper.
Weaknesses: 1. I wonder if it is possible to provide some simple (even toy) examples to empirically check the results and illustrate the theory in a better way.
2. The dense Gaussian family is still limited (at least in my opinion). It is better to discuss whether the techniques developed in the paper can be applied to more general cases (e.g., black-box variational inference with a neural network model) or at least discuss the challenges.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We will respond to two points.
### Examples
We ask for some consideration for the constraints imposed by a 9-page limit. Given that this is a theoretical paper whose goal is to provide guarantees for algorithms already commonly used in practice, we prefer to focus entirely on theory.
### Dense Gaussians
We agree this is an important issue that should be discussed more in the paper. We emphasize that our new optimization results in Section 3 apply more broadly than just Gaussian VI (and indeed more broadly than VI)—they apply to any composite objective consisting of a smooth part and a non-smooth part where (1) the non-smooth part either (1a) admits a tractable proximal operator or (1b) can be projected onto a constraint set where it is smooth, and (2) the gradient noise obeys a quadratic bound. It is conceivable that such guarantees could be shown for more general variational families, and if so our convergence results would provide "plug in" optimization guarantees. We suspect it might be fairly easy to establish such guarantees for, e.g., elliptical distributions or location-scale families, using similar proof strategies as for Gaussians. But doing it for much broader classes would definitely be a challenge—the properties (1) and (2) we use above were developed over several years in a sequence of papers. We propose to add a short discussion of this topic to Section 5 of the paper.
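To illustrate the generic "plug in" structure of a composite objective (smooth part plus prox-friendly part), here is a minimal prox-SGD loop; the lasso-style instance below is a toy problem of my own, not the paper's Gaussian-VI objective:

```python
import numpy as np

def prox_sgd(grad_f_hat, prox_r, w0, step, n_steps):
    """Proximal SGD for a composite objective f(w) + r(w):
    w <- prox_{step * r}(w - step * (noisy gradient of f at w))."""
    w = w0
    for _ in range(n_steps):
        w = prox_r(w - step * grad_f_hat(w), step)
    return w

# Toy instance: smooth part f(w) = ||A w - y||^2 / 2 with additive gradient
# noise; non-smooth part r(w) = lam * ||w||_1, whose prox is soft-thresholding.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 5))
w_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
y = A @ w_true
lam = 0.1

grad_f_hat = lambda w: A.T @ (A @ w - y) + 0.1 * rng.standard_normal(5)
soft_threshold = lambda w, s: np.sign(w) * np.maximum(np.abs(w) - s * lam, 0.0)

w_hat = prox_sgd(grad_f_hat, soft_threshold, np.zeros(5), step=1e-3, n_steps=5000)
```

In the VI setting discussed above, the smooth part would play the role of the energy term and the non-smooth part the entropy term; the convergence guarantees then require exactly the prox/projection structure and a quadratic bound on the gradient noise.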
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply. I'm happy for the clarifications regarding how the theory applies to more general cases. I will keep my score but will increase my confidence. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper analyzed the convergence of Black-box VI, which has been widely used in variational inference in recent years. Under the assumption that the target joint distribution is convex or strongly convex and the variational posterior distribution is Gaussian, the convergence of the variational parameter obtained by Black-box VI to the ELBO optimal solution is first presented when using Prox-GD and Projected Gradient Descent. An important aspect of the analysis of Black-box VI is that the gradient is stochastic.
To deal with this, the authors introduced a class of quadratically bounded estimators as estimators of the gradient and combined them appropriately with the existing Prox-GD analysis.
Strengths: Although the proposed analysis makes a number of assumptions, the results are significant in that they include an important Bayesian model that is used in practice.
For the first time, an analysis of the convergence of Black-box VI is presented, albeit with a rather limited problem, such as the assumption of variational posterior distribution as Gaussian distribution.
Weaknesses: It was very difficult to understand how novel and important the proposed analysis is from a single reading. In existing studies, each component of the presented analysis is already well known, but I am not sure how important and novel the presented VI analysis is as the combination of those well-known results.
By assumption, since the objective function is convex (or strongly convex), convergence using Prox-SGD or other methods seems apparent; the difficulty of the problem stems from the noisy version of the gradient, as discussed in Sec 2.3. On the other hand, there have been many studies discussing noisy versions of prox GD, such as Stochastic Proximal Gradient Descent [1], so I am not sure how novel this analysis is.
In the proofs of Theorem 6 and 7, I understood that the novelty is mainly the technique to treat the gradient estimator that satisfies the quadratic bounded property shown in Definition 4. However, I still did not understand how important it is compared to existing studies such as the Stochastic Proximal Gradient Descent study.
[1]Convergence of Stochastic Proximal Gradient Algorithm, L. Rosasco et al.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I would like to know more about the novelty of this study in the context of Stochastic Proximal Gradient Descent in terms of technical aspects and assumptions of the proof.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, the limitations of the statements are clearly explained.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your review.
### Novelty of the analysis
Given the current state of knowledge, the convergence of Prox/Proj-SGD is not apparent, because the convexity of the objective function alone is not enough. A second essential hypothesis must be satisfied by the estimator of the gradient: typically, results require a noise bound that depends on the suboptimality gap, gradient norms and constants. This is for instance the case in the paper [1] suggested by the reviewer, and we also refer to [KR] for a good discussion on noise bounds that result from standard assumptions. Those standard noise bounds can be verified under the common assumption that the stochastic functions are (uniformly) smooth, or Lipschitz. But our problem is that our gradient estimators do not satisfy such a typical noise bound, but instead the quadratic error bound presented in the paper. There is indeed a significant literature on the analysis of Prox/Proj-SGD, but we carried out a thorough bibliographic review and consulted with experts to find that no existing analysis could accommodate such quadratic noise bound while at the same time handling a composite non-smooth objective. The goal of Section 2.3 in our paper is to explain to the reader exactly what in our paper is novel. We will try to explain this more clearly in the revision.
[KR] Better Theory for SGD in the Nonconvex World, Ahmed Khaled & Peter Richtarik, TMLR 2022
### Is the main novelty in treatment of the gradient estimators?
Broadly speaking, yes. Our proof technique borrows several ingredients from previous analyses of SGD, and the way we handle convexity and smoothness is rather classical. The main difficulty we faced is how to handle the quadratic noise bound together with a composite non-smooth objective. For this we needed a relatively new weighted telescoping technique, the application of which involves substantial technical difficulties (see proofs on pages 21-30 of the appendix).
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed clarification. My concerns are now solved. I will increase the contribution score. | null | null | null | null | null | null |
Reward Finetuning for Faster and More Accurate Unsupervised Object Discovery | Accept (poster) | Summary: The paper addresses the research question of unsupervised 3D object location detection from LIDAR data in autonomous driving scenes.
The proposed method, DRIFT, improves upon the MODEST baseline by incorporating heuristics for judging objectness likelihood and using them as rewards within a reinforcement learning framework.
DRIFT is evaluated on the LYFT and Ithaca benchmarks, demonstrating enhanced performance and training efficiency compared to the MODEST baseline.
**Before rebuttal**:
The paper presents a promising contribution and could be a valuable addition to NeurIPS this year. However, since my familiarity lies more in the domain of object detection from 2D scenes, I may not be able to provide detailed insights into the previous work in the LIDAR domain.
Strengths: **S1.** The paper is well-written, well-executed, and effectively addresses the under-explored problem of unsupervised 3D object location detection in autonomous driving scenes. The proposed method, DRIFT, demonstrates the potential for real-life applications.
**S2.** The proposed idea is well-grounded and logical. The incorporation of studied priors based on shape, size, and location to judge objectness likelihood aligns with human intuition. The utilization of a feedback loop through reward-based fine-tuning to improve model performance over time is a sensible approach.
**S3.** The reported improvement achieved by the authors is significant. Despite training for only 30 epochs (compared to MODEST's 600 epochs), DRIFT outperforms the strong baseline on both datasets. Although there is still a noticeable gap when compared to supervised methods, the results demonstrate promising progress.
Weaknesses: **W1.** It would have strengthened the paper to provide a context within the well-studied objectness literature [1-2]. The concept of objectness aims to identify low-level, generic cues that distinguish foreground regions from the background, such as edge distributions, boundary textures, and likely object size, shape, and location. The authors' goal aligns with this objective, albeit from a different modality (LIDAR). Drawing inspiration and techniques from the objectness literature could have been insightful.
**W2.** An explanation of how the authors generated the shape templates (priors, prototypes) would have been beneficial. It is unclear how these templates were derived or selected.
**W3.** Given the reliance on multiple heuristics in the method (which is reasonable and sound), it would have been valuable to investigate cross-dataset generalization, specifically from LYFT to Ithaca. Understanding how the measured statistics change across different domains and driving scenes would provide insights into the method's robustness and applicability.
[1] BING: Binarized normed gradients for objectness estimation at 300fps, https://mmcheng.net/mftp/Papers/ObjectnessBING.pdf
[2] Survey and Performance Analysis of Deep Learning Based Object Detection in Challenging Environments, https://www.mdpi.com/1424-8220/21/15/5116
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the weaknesses. More like a suggestion:
Please state from the very start that your goal is to find generic (foreground vs. background) object regions, and NOT categorization. This may be confusing to some readers like me: when you use the term "Object Discovery", I look for a semantic categorization component as well.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are presented.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive feedback and appreciation of our contributions! We address individual questions below:
> Include context within the well-studied objectness literature
We thank the reviewer for pointing out the comprehensive works [1, 2] and the explanation of the concept of objectness. Indeed, objectness aligns with our goals in this work, especially its aim of identifying how likely something is to be an object of *any category*. In particular, we were surprised to learn about the metrics used in this field, which are explicitly designed to handle class-free proposal generation, albeit in a different modality. In future work, we will draw inspiration from these works when evaluating research in this direction. We thank the reviewer for bringing this subject to our attention and will include an extensive discussion of objectness estimation as a prior for our work.
> Explanation of how shape templates were generated
We show category-wise object size statistics (unit: meter) in the table below in the format of $\mu$ ($\sigma$):
| Dataset | Category | Length | Height | Width |
|--|:--:|:--:|:--:|:--:|
| Lyft | Truck | 9.40 (3.15) | 3.30 (0.43) | 2.83 (0.27) |
| Lyft | Cyclist | 1.75 (0.33) | 1.36 (0.34) | 0.61 (0.26) |
| Lyft | Car | 4.74 (0.56) | 1.71 (0.25) | 1.91 (0.16) |
| Lyft | Pedestrian | 0.80 (0.18) | 1.74 (0.18) | 0.78 (0.15) |
| Ithaca365 | Truck | 6.09 (2.57) | 2.33 (0.77) | 2.22 (0.48) |
| Ithaca365 | Cyclist | 1.75 (0.64) | 1.52 (0.51) | 0.71 (0.22) |
| Ithaca365 | Car | 4.41 (0.22) | 1.55 (0.15) | 1.75 (0.12) |
| Ithaca365 | Pedestrian | 0.60 (0.19) | 1.70 (0.14) | 0.61 (0.12) |
We selected the templates of our shape priors, as well as the number of prototypes, based on the Lyft L5 dataset statistics, following prior self-driving works using the classes cars, pedestrians, cyclists, and trucks (Supplementary, Tab. 1). In this work, we assume domain knowledge of the sizes of the objects we wish to discover. However, we extensively ablate how much the priors and prototypes affect the final results, finding that our method is not sensitive to the values (Supplementary, Tab. 8) and generalizes to the Ithaca365 dataset *without changing the values* (Table 2). In addition, we found the method is not sensitive to the number of prototypes. We include this ablation in the response to reviewer dARK and again below:
| Num. Factors | IoU 0.5 | IoU 0.7 |
|--------------|:-------:|:-------:|
| 4 (GT) | 38.3 | 23.1 |
| 5 | 37.0 | 23.0 |
| 6 | 36.0 | 22.7 |
Including additional prototypes at 10 and 15 meters does not significantly change the results, consistent with the intuition that the method will search for the size prototype that best describes/fits the dynamic points.
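As an illustration of how such size statistics can act as a prior, here is a minimal sketch — our own simplification, not the paper's exact formulation — that scores a box's (length, height, width) against the Lyft prototypes above, with each Gaussian component normalized so that all peaks take the same value:

```python
import numpy as np

# Lyft size statistics (length, height, width): mean and std in meters,
# taken from the table above. The reward form below is an illustrative
# sketch, not the exact reward used in the paper.
PROTOTYPES = {
    "Truck":      (np.array([9.40, 3.30, 2.83]), np.array([3.15, 0.43, 0.27])),
    "Cyclist":    (np.array([1.75, 1.36, 0.61]), np.array([0.33, 0.34, 0.26])),
    "Car":        (np.array([4.74, 1.71, 1.91]), np.array([0.56, 0.25, 0.16])),
    "Pedestrian": (np.array([0.80, 1.74, 0.78]), np.array([0.18, 0.18, 0.15])),
}

def size_prior_reward(box_lhw):
    """Score a box by its best-matching size prototype.

    Each prototype is a diagonal Gaussian over (length, height, width).
    Dropping the per-component normalizer equalizes all peaks at 0, so a
    box exactly matching any prototype mean scores 0 and implausible
    sizes score strongly negative.
    """
    box = np.asarray(box_lhw, dtype=float)
    best = -np.inf
    for mu, sigma in PROTOTYPES.values():
        log_density = -0.5 * float(np.sum(((box - mu) / sigma) ** 2))
        best = max(best, log_density)
    return best
```

For example, a car-sized box such as `[4.7, 1.7, 1.9]` scores near 0, while an implausible shape such as `[2.5, 0.3, 3.0]` scores far lower, matching the intuition that the method searches for the prototype that best fits the points.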
> Cross-dataset generalization of heuristics (Lyft to Ithaca365)
We calculate the true $\mu_{scale}^*$ and $\sigma_{scale}^*$ from dataset statistics:
|Dataset|$\mu_{scale}^*$|$\sigma_{scale}^*$|
|--|:--:|:--:|
|Lyft|0.9428|0.2772|
To confirm, we selected all of our heuristics and hyperparameters on the Lyft dataset, and directly transferred the method into the Ithaca365 dataset *without changing the values*, suggesting at least some generalizability in the method.
> Goal ambiguity from the introduction text
We thank the reviewer for pointing out this possible cause of confusion, and will ensure that our final wording is clearer in our objectives.
[1] BING: Binarized normed gradients for objectness estimation at 300fps
[2] Survey and Performance Analysis of Deep Learning Based Object Detection in Challenging Environments
---
Rebuttal Comment 1.1:
Title: Still positive
Comment: Dear all,
After going through whole post review text, I remain positive for this submission. | Summary: The proposed DRIFT framework is an approach to realize object discovery without labels. DRIFT first extracts foreground proposals based on the PP-score method, and then leverages common-sense heuristics including shape prior, box alignment, and background point filtering, to reward proposed boxes. Reinforcement-learning-based optimization is adopted, maximizing the reward function in a local space. The experimental results demonstrate the superiority of DRIFT over prior self-training methods in terms of efficiency and generalizability.
Strengths: - The framework for 3D object discovery that simplifies common-sense heuristics as Gaussian-based reward signals to fine-tune the detector is interesting, straightforward, and relatively novel. Since the object discovery paradigm does not incorporate pre-defined class labels, it is promising in out-of-domain object perception tasks.
- Experiments are basically comprehensive, with ablations on key reward components and plenty of hyperparameters.
- The paper is well-written, and the ideas are presented clearly, making it easy to follow the authors' arguments and understand their contributions.
Weaknesses: - The authors mainly compare DRIFT with proposal-based (PP-score) baselines.
- A missing related work for comparison: [a]. [b] is concurrent work, but it is valuable to discuss even though a direct comparison is not feasible.
- The PP-score method requires multiple traversals to achieve superior performance, which could limit generalizability in real-world applications. This helps DRIFT come close to the out-of-domain supervised detector trained on KITTI on the Ithaca365 dataset, but on ordinary datasets without many traversals, DRIFT actually has a large gap compared to the out-of-domain supervised detector.
- Current heuristics behave poorly on pedestrians, as shown in the per-class BEV mAP for the pedestrian class in Fig. 6 & Supp. Fig. 1. Heuristics need careful design, and more sophisticated ones are probably necessary to achieve satisfactory results on all object categories.
- The reviewer is curious about the training efficiency. Any more insights about why the proposed DRIFT converges much faster than self-training and MODEST?
- Any more insights about the large StD. in Table 1?
- Misc: Fig. 1 is not referenced in the main body.
> [a] Motion Inspired Unsupervised Perception and Prediction in Autonomous Driving. ECCV 2022.
>
> [b] Towards Unsupervised Object Detection from LiDAR Point Clouds. CVPR 2023.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Please see weaknesses above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have discussed limitations at the end of the paper. The discussion is valuable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive reviews and thoughtful feedback. We address the individual points below:
>Missing related works
Thank you for pointing to these papers. Unfortunately we did not find code for them, and thus could not compare with them during our rebuttal. We will instead include discussion on them in our final version.
> Generalizability in real-world applications
We would like to highlight that in the real world, it is common for people to follow the same routine, or for different people to share the same routes, which can be pooled together to produce multiple traversals. Therefore, while not all publicly available datasets contain multiple traversals, such data are highly likely to become accessible in large quantities when developing self-driving cars in practical contexts.
> Performance on Pedestrians
The limited performance on pedestrians can be mainly attributed to two reasons. First, DRIFT itself does not predict classes for its detections, and we can only assign class labels during post-processing based on the most similar shape priors. Pedestrians are sometimes mislabeled as cyclists and vice versa, as they often have similar box sizes. Second, the classes are highly imbalanced in both datasets. For example, the Lyft dataset contains 92.47% cars and 5.91% pedestrians. This limits a detector's ability to learn to detect the smaller classes. We leave this study to future work.
> Training efficiency
One potential reason is that DRIFT guides boxes to adhere to common sense heuristics using the reward function throughout the entire training process, while self-training and MODEST only leverage them for seed label generation and (for MODEST) filtering between self-training episodes. Therefore, DRIFT more effectively harnesses the heuristics, and leads the detector to learn to predict appropriate boxes much faster. For additional details, see response to reviewer dARK, Similarity and advantages over self-training methods.
> Large Standard Deviation in Supp. Table 1
The large StD.s can be caused by several reasons. First, there are inherent size variations among objects within the same class; for example, a compact car and an SUV both belong to the Car class but have distinct dimensions. Additionally, the values in Supp. Table 1 are computed from the box shapes in the Lyft dataset, and there are variations in human annotations.
> Missing reference for Supp. Figure 1
Thanks for pointing this out! We will add a reference for it in our final version.
---
Rebuttal Comment 1.1:
Title: Post-rebuttal Comment
Comment: Thanks for the feedback.
**[Re: Large Std.]** My intent was to ask the large Std. in Table 3 compared with other models in the main paper. The number in my original question was a typo. Sorry for the mistake. Do you have more insights about the large StD. in Table 3?
---
Reply to Comment 1.1.1:
Title: Insights into Table 3 StD.
Comment: In general, bad boxes consistently yield low rewards, as seen in the low mean and low StD. of the "rand boxes" row in Table 3. Good boxes tend to get high rewards on average, but the rewards may vary across different boxes because the signal used to compute the rewards can be noisy though still correlated with box quality. For instance, one part of our reward function, dynamic/static point counts, varies depending on the proximity of the box to the LiDAR sensor, since LiDAR point density decreases with range. As another example, the P2-score we used as an approximate foreground segmentation is a noisy heuristic for the dynamic points. Thus while the rewards are correlated with actual box quality (as shown by the high mean reward for the highest quality boxes, i.e., the ground truth), the correlation is not perfect. That said, note that this large StD. has a minimal impact on DRIFT, as the top *k*% filtering retains mostly good boxes, allowing the detector to learn to identify dynamic objects from them. | Summary: This paper proposes a new reinforcement-learning-based framework for unsupervised 3D object detection that uses common-sense heuristics directly as a reward signal, avoiding the need to handcraft training examples for each object detector. Furthermore, the method improves detection results while greatly accelerating model convergence.
Strengths: 1. The proposed reinforcement-learning-based framework avoids encoding heuristics into differentiable loss functions and removes the need to hand-engineer a training paradigm.
2. The method proposed by the authors converges quickly and has great practical value for algorithm implementation.
3. The experiments in this article are relatively thorough, and the charts are easy for readers to understand.
Weaknesses: 1. The arrangement of the content is not very convenient to read, and there are some careless citation errors; see Questions.
2. The authors should provide further explanation or derivation for the formulas in the article.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. In line 40, can it be understood that the method proposed in this article is to perform finetune on the pre-trained MODEST through a reward-based method? If the answer is yes, is the time comparison in Figure 1 meaningful or fair? Moreover, LLM is mentioned more than once in the article, but this article does not seem to use a specific LLM model. I think this may cause some misunderstandings for readers at the beginning.
2. The method proposed in this paper reads like a specific RAFT method applied to unsupervised object detection, so the novelty may be limited.
3. Regarding the training steps mentioned in Line 56: as far as I know, many self-training methods can achieve the same goal, so please explain your advantages.
4. Lines 126 to 132 seem to give the boxes from the detector prior information or heuristic constraints from the real world. You model this with a mixture of Gaussian distributions; how do you determine the mixture weights? Regarding the section at line 115: what happens if you use more than 4 mixture factors? It seems that the model only focuses on objects of these sizes. In line 131, what happens if you do not use the scale operation? Please give an example.
5. Is Figure 3 drawn from a real dataset? If so, please describe which data were used in the supplementary material. Can this be understood as prior information derived from daily data?
6. The paragraph starting on line 140 means that by scaling, more points are included in the box, this seems to serve the same purpose as jittering mentioned in Sec 3.2 or Intro. What is the difference between them?
7. The sensitivity analysis of $\mu_{scale}$ in line 156 should refer to Tab. 5.
8. The ablation experiment in Tab. 4 should be complemented with the other 3 groups, and the paper does not analyze the reason for this phenomenon, especially where the performance gap is huge. I want to know how the three components of the proposed reward function affect detection performance, respectively.
9. Please confirm the experimental code used to reproduce MODEST in Tables 1 and 2. The results, with or without P2 filtering, seem far from those in the original MODEST paper.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: One limitation is that this paper uses MODEST as the baseline. If the number of timestamps is not large, continuing to use the p2 score may aggravate the impact of noise on the model. This is a direction that needs to be addressed. Another limitation is that since it is unsupervised 3D object detection, not only static mobile objects should be detected, but all static objects should be detected, which may be beneficial to downstream tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable comments, and will incorporate all of the edits/analysis that they suggest. Regarding the mention of LLMs, our method draws inspiration across many different fields (Reinforcement Learning, Object discovery, RLHF) and we had hoped to showcase an example of where such a bridge in these different fields has proven successful. We thank the reviewer for pointing out this potential point of confusion, and will clarify it in the final revision.
> Finetune on the pre-trained MODEST clarification
Our method is not fine-tuning a MODEST detector, and the time comparison in Fig. 1 is correct. The x-axis refers to the number of training epochs *from scratch* needed to obtain the values. The model we are fine-tuning from is pre-trained on noisy seed labels produced by DBSCAN clustering on spatial and PP-score (L203-206), plotted as the yellow dot in Fig. 1.
> Novelty may be limited: similarity to RAFT
This work follows a broader line of work known to the Reinforcement Learning (RL) and Controls community as Filtered/Top-K behavior cloning [1]. However, while this approach is *well studied for controls applications*, we consider our work a pioneering effort in bringing its advantages to object discovery. The work RAFT [2] also cites RL as inspiration for improving generative modeling. However, the techniques of RL have never been used for object discovery; we demonstrate that by leveraging its tools to optimize non-continuous and non-differentiable rewards, we can obtain strong performance (Tables 1, 2). In summary: 1) we explore the task of object discovery as opposed to generative modeling, and 2) we formalize novel reward and exploration functions for this setting, which has never been studied before. We additionally point out that, at the time of submission, RAFT had not been published at an official conference venue and should be considered concurrent work.
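The filtered/top-*k*% behavior-cloning step can be sketched as follows — a minimal illustration under our own simplifying assumptions, not the paper's implementation; the reward function and proposal representation are placeholders:

```python
import numpy as np

def topk_filter(proposals, reward_fn, keep_frac=0.1):
    """Filtered behavior cloning: score every box proposal with a
    (possibly non-continuous, non-differentiable) reward function,
    then keep only the top keep_frac fraction to serve as the
    pseudo-labels for the next detector update."""
    rewards = np.array([reward_fn(p) for p in proposals])
    k = max(1, int(len(proposals) * keep_frac))
    keep = np.argsort(rewards)[::-1][:k]  # indices of the highest rewards
    return [proposals[i] for i in keep], rewards[keep]

# Toy usage: proposals are scalars and the reward is the identity,
# so the filter simply keeps the largest values.
kept, kept_rewards = topk_filter([3.0, 1.0, 2.0, 5.0, 4.0],
                                 lambda x: x, keep_frac=0.4)
```

Because the reward enters only through the ranking, it can be any heuristic score — non-continuous or non-differentiable — which is exactly the property emphasized above.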
> Question regarding size prior mixture of Gaussian distribution
We comment about shape templates and their generalizability in the general response, and additionally address it here. The mixture weights that we use are computed such that the probabilities at the Gaussian means are equal for stability reasons (L131). We show that the method is robust to mixture weights (Tab. 8) as well as number of mixture factors (general response).
> Reward component analysis
We analyze the components of the reward function in Tab. 4, Fig. 5, and provide additional discussion into the findings. Removing the filtering step causes the object detector to be unable to distinguish between foreground objects and static background, thus the drop in performance (Tab. 4, L3). To better understand the reward ablation, we visualize the resulting predictions in Fig. 5 of the main paper. Removing the size reward causes the model to place bounding boxes of incorrect shape (Fig. 5, lower left). Removing the alignment reward results in boxes that have points in the center of the box, optimizing for large boxes that capture all the dynamic objects (Fig. 5, lower right). Because these failure cases are size and shape specific, the metric of mAP at IoU 0.7 and IoU 0.5 cannot capture these kinds of errors, thus the large drop in perceived performance (Tab. 4, L2&4).
> Clarifications about Figure 3
Figure 3 is plotted from real data, specifically the Lyft L5 object detection dataset. We will include this in the final version, and thank the reviewer for pointing out this error.
> Clarification about scale versus jittering
To clarify, jittering is exploration over box proposals to search for potentially better candidates, while scale refers to the ratio near a particular box used to calculate the reward associated with that box. All boxes produced by jittering consider their own $o(b)$ set of points when computing the reward (and thus the scale), and the best boxes among all jittered boxes are used to update the model (L139-140, 170-172).
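A minimal sketch of jittering as exploration — the box parameterization, noise scale, and function names here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def jitter_explore(box, reward_fn, n_samples=32, noise=0.1, seed=None):
    """Exploration by jittering: sample Gaussian perturbations of a box's
    parameters (e.g., x, y, z, l, h, w, yaw), score each candidate with
    the reward, and return the best one. The original box is always kept
    as a candidate, so the result is never worse than the input."""
    rng = np.random.default_rng(seed)
    box = np.asarray(box, dtype=float)
    candidates = box + rng.normal(0.0, noise, size=(n_samples, box.size))
    candidates = np.vstack([box, candidates])  # keep the unperturbed box
    rewards = np.array([reward_fn(c) for c in candidates])
    return candidates[np.argmax(rewards)]
```

In the full method, the reward evaluated here would itself use each candidate's own $o(b)$ point set, so jittering (search over candidates) and scale (how each candidate's reward is computed) remain distinct roles.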
> Clarification about baseline reproduction
We report the results on mAP at IoU 0.5 and 0.7, which is different from the results reported in MODEST (IoU 0.25, 0.5). We made this choice to follow the precedent set in prior object detection works, KITTI dataset and Lyft dataset. We used the official implementation for baseline reproduction. We will provide results for IoU 0.25 in the final submission, but note that at such a low IoU the metric captures only the localization accuracy of the model. Our work is able to localize objects as well as predict the correct shape completion, as shown in the higher performance at the more strict IoU matches (Tab. 1).
> Clarification regarding number of timestamps and impact of noise
We hope to clarify that the computation uses repeated traversals as opposed to consecutive timestamps to compute P2-score, and is actually more robust to noise per-sequence in the current traversal as demonstrated by [3, 4].
> Limitation: discovery of static objects
We agree with the reviewer that this is a limitation of the current method (L304-305); however, we are limited by the data and labels that are present in an academic setting. In this work, we focus on object discovery for self-driving following prior self-driving object detection tasks. We note that there are only labels for dynamic objects, which is necessary for evaluation. In theory, however, one can define a reward function for any objects they desire to discover, both static or dynamic, and leave this to future works (L307).
[1] Kumar et al. 2022. When Should We Prefer Offline Reinforcement Learning Over Behavioral Cloning?
[2] Dong et al. 2023. RAFT: Reward rAnked FineTuning for Generative Foundation Model Alignment (in submission)
[3] Barnes et al., 2017. Driven to Distraction: Self-Sup. Distractor Learning for Robust Monocular Visual Odometry
[4] You et al., 2022. Hindsight is 20/20: Leveraging Past Traversals to Aid 3D Perception
---
Rebuttal Comment 1.1:
Comment: I appreciate the efforts the authors spent on their rebuttal! It solves some of my concerns. But I have two other questions.
1. As mentioned by reviewer iC9C, 'Since the approach is supposed to find objects unsupervised, hyperparameter tuning and then manually checking the results by a human who looks at the bounding box detections in the point cloud may improve the results but not in a fair way. It goes against the idea of unsupervised object detection.' While your answer is that these parameters are very robust to other datasets, this seems to be at the expense of learning from the data, can you give another explanation?
2. The scoring function designed in Sec3.1 seems too heuristic. In 2d unsupervised object detection, there are some works like[1] which leverage VLMs to generate or filter pseudo labels, can you refer to these ideas to further improve the effect of the model or reduce the complexity of the model?
But in my opinion, I think the novelty of this paper is still limited, and I insist on my points for now.
[1] Zhao, Shiyu, et al. "Exploiting unlabeled data with vision and language models for object detection." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.
---
Reply to Comment 1.1.1:
Title: Thank you for your response.
Comment: We respectfully disagree that this method lacks novelty; it is a compellingly simple, effective, and robust framework, as shown by the variety of ablations both in the paper and in the subsequent rebuttal. However, we agree that there are *ways the method can be expanded* within this framework. Part of our contribution lies in a general framework that can leverage functions of any type, differentiable or not, in an efficient manner, and that successfully bridges the concept of an exploration mechanism, common in RL, into object discovery.
We thank the reviewer for pointing to another way the reward and exploration functions can be extended to incorporate additional modes of information such as language [1]. While out of scope for this work, we aim in future work to explore reward/exploration functions appropriate for such settings (L307-308). However, our current contribution rethinks the problem of object discovery in a fundamentally new way and, in doing so, yields substantial performance gains. Indeed, reviewers iC9C, vhou, and Baoq consider the method to be novel and a valuable contribution, and we would be happy if reviewer dARK could reconsider their review in this light.
> Hyperparameter tuning, domain knowledge, and learning from data.
We base the selection of many of our hyperparameters on human domain knowledge, which does not necessarily need direct learning from the data. For instance, we can readily obtain dimensions of transportation tools from online sources [2], and estimate human body shapes using health statistics reports from the CDC [3]. This strategic application of domain knowledge enables us to scale to large amounts of scenes and does not require human involvement for each individual scene. We contend that to do unsupervised learning, we must assume some knowledge, either from domain knowledge, learned from human labels, or distilled from other datasets. In this work, we assumed access to simple and easily accessible domain knowledge and believe it is the most scalable solution.
> Leveraging VLMs to generate or filter pseudo labels
To clarify, the scoring function is the objective we wish to optimize. Proposals (i.e., pseudo-labels) are guided by exploration to identify boxes that improve the scores (i.e., rewards) obtained. Better proposals will move the model to better regions more quickly, thus increasing the efficiency of the method, but ultimately the objective is to maximize the reward. Under our framework, if the aim is to improve the reward function, one can use VLMs to lift features from 2D scene images, then associate them with the 3D points via another model. The final reward can be some function that encourages detections around features corresponding to VLM features. This would be the most direct analogy to how the work [1] utilized VLM features. However, this would require a 2D-to-3D model, since all VLMs are currently trained in the 2D image domain, and at this moment none exist for 3D data. In this light, our current reward formulation is the most simple and straightforward, but we would be greatly encouraged if such a signal becomes available in the future. We thank the reviewer for pointing out this interesting work and will expand the discussion regarding it.
[1] Zhao, Shiyu, et al. "Exploiting unlabeled data with vision and language models for object detection." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.
[2] https://www.dimensions.com/classifications/transport
[3] Fryar CD, Kruszon-Moran D, Gu Q, Ogden CL. Mean Body Weight, Height, Waist Circumference, and Body Mass Index Among Adults: United States, 1999-2000 Through 2015-2016. Natl Health Stat Report. 2018 Dec;(122):1-16. PMID: 30707668. | Summary: This paper proposes DRIFT, a novel reward fine-tuning method for unsupervised object discovery with point cloud input. Specifically, three reward methods are proposed to identify good bounding boxes. First, shape prior reward prefers bounding boxes with similar sizes to the prototypes. Second, an alignment reward gives high scores to boxes that have most of the LiDAR points near the box edge. Third, a filter reward follows the spirit that a good box should contain more dynamic points than background points. By using these reward methods for refining the object detector, DRIFT achieves more accurate object discovery, and it also converges much faster than the previous method.
Strengths: (1) The proposed reward fine-tuning method is well-motivated and improves object discovery accuracy compared to the previous work.
(2) In addition to improving the detection performance, the proposed method can also greatly improve the training speed.
(3) The expression of the paper is clear, and the figures are intuitive.
Weaknesses: (1) Although intuitive, I think the prior that most points fall on the lateral surfaces of an object is too absolute; e.g., many LiDAR points will fall on the hood and front window of oncoming vehicles, besides the lateral surfaces. Apart from this, using a Gaussian distribution as an approximation is also inaccurate, because few points fall outside the box edge, as shown in Figure 3.
(2) I think a missing ablation is the contribution of the exploration strategy to the final detection performance, similar to the ablation on the reward methods in Table 4.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: (1) Authors should provide complete experiment results and refine their experimental analysis correspondingly, as some of the experiments are still running at the time of submission (L234 & L258).
(2) In L224, the authors say they include evaluation results with IoU at 0.25 in the supplementary, but I don't see these results. Authors should add these results and explain why they don't provide these results in the main paper.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors provide limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > About using the Gaussian distribution as an approximation
We agree with your point that the Gaussian approximation might not be the most accurate one, but from our observation the hood or front window of vehicles contributes only a small fraction of the point cloud (Figure 2 in the main paper shows an example), since lidar reflections are sparser on glass or on surfaces parallel to the beam. We also contend that this approximation is one of the simplest and most robust while yielding satisfactory performance. We showed ablation studies on varying the Gaussian parameters in Table 5 in the main paper and Tables 2 and 8 in the supplementary material, demonstrating robust results. We hope this serves as a simple but strong first step and encourages further research.
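To make the intuition concrete, a toy sketch of such a Gaussian-based alignment score (the functional form and the mu/sigma values here are illustrative placeholders, not the exact formulation or tuned parameters from the paper):

```python
import math

def alignment_reward(dists, mu=0.9, sigma=0.2):
    """Toy alignment score for a candidate box.

    `dists` holds each point's normalized distance from the box center
    (1.0 = exactly on the box surface). Points are scored under a Gaussian
    centered slightly inside the surface, so points hugging the box walls
    score high and points deep inside (or far outside) score low.
    mu/sigma are illustrative values only.
    """
    return sum(math.exp(-0.5 * ((d - mu) / sigma) ** 2) for d in dists) / len(dists)
```

Under this sketch, a box whose points lie near its surface receives a much higher score than one whose points cluster near its center, which is the behavior the reward is meant to encourage.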
> About the contribution of exploration strategy:
We note that we have included Tables 6, 7, and 8 in the main paper and Tables 4, 5, and 6 in the supplementary material for the contributions of different exploration strategies. In addition to Table 6 in the main paper, we note that if we do not use any exploration at all, the detection performance drops to 0 (we will add a line to Table 6 for this in the final version) due to confirmation bias (a key problem with self-training), in which case the model gets indulged in its own proposals and the training loss cannot provide meaningful gradients.
> Final number
We report our results for up to 300 epochs below:
| | IoU 0.5 | IoU 0.7 |
|-----------------|:-------:|:-------:|
| No Finetuning | 23.9 | 10.5 |
| MODEST (600 ep) | 39.6 | 18.8 |
| DRIFT (30ep) | 38.3 | 23.1 |
| DRIFT (60 ep) | 41.8 | 26.7 |
| DRIFT (120 ep) | 45.3 | 29.6 |
| DRIFT (180 ep) | 43.2 | 30.2 |
| DRIFT (300 ep) | 44.6 | 31.3 |
We observed that the method converged soon after the final reported numbers at 120 epochs. We will include the final numbers in the report.
> About results for IoU 0.25:
Thanks for pointing this out! We accidentally missed this in the supplementary material, and we provide them below. We will add this to the final version.
Table Results for IoU 0.25
| | 0-30 | 30-50 | 50-80 | 0-80 |
|----------------------|:----:|:-----:|:-----:|:----:|
| No Finetuning | 63.5 | 34.9 | 6.0 | 37.5 |
| Self Train. (60 ep) | 67.7 | 43.2 | 8.7 | 43.1 |
| Self Train. (600 ep) | 67.7 | 48.0 | 13.3 | 45.5 |
| MODEST (60 ep) | 68.5 | 46.2 | 10.5 | 45.1 |
| MODEST (600 ep) | 73.6 | 56.8 | 21.0 | 53.6 |
| DRIFT (60ep) | 72.3 | 51.5 | 19.2 | 50.7 |
| DRIFT (120 ep) | 72.5 | 51.7 | 25.8 | 52.9 |
| Supervised on KITTI | 78.6 | 53.9 | 26.1 | 55.3 |
| Supervised on Lyft | 81.8 | 63.6 | 40.0 | 64.2 |
One thing to bear in mind is that the IoU 0.25 metric mainly evaluates “localization”, i.e., whether there is a bounding box with even a tiny overlap with the GT. However, our method excels at size and proper orientation, which is better captured by higher-IoU metrics. Basically, MODEST can place a box but is not able to figure out the size, while our method does both. As stated in the main paper, we use 0.5 and 0.7 to 1) follow the KITTI and Lyft reporting standards and 2) emphasize the strength of our method. | Rebuttal 1:
Rebuttal: We express gratitude to the reviewers for their constructive feedback on our work and appreciate their acknowledgment that the writing is "well written" and "easy and intuitive" to follow [iC9C, ZfQG, vhou, Baoq]. To reiterate, our work introduces a novel adaptation of Reinforcement Learning (RL)-based methods for unsupervised object discovery from LiDAR points, which surpasses prior works in both accuracy and training efficiency. We appreciate the reviewers' recognition that "[our method] improves performance by a large margin in a very novel research area" [iC9C] and that our method is noted to "significantly improve over [prior works]" by all reviewers. Reviewer dARk concisely characterizes our work as a reinforcement learning-based framework that mitigates the need to encode heuristics into differentiable loss functions. In summary, we are thankful that reviewers have found our work to be "straightforward" [iC9C, vhou] and "well-motivated" [ZfQG, Baoq].
Additionally, we present a general comment regarding the advantages over self-training methods:
Our method, DRIFT, significantly outperforms vanilla self-training for object discovery and converges at a much faster rate (Tables 1 and 2, Figure 1: Self-Train baseline). Traditional self-training iteratively generates pseudo-labels and retrains the model, requiring convergence before generating the next set of pseudo-labels. In unsupervised scenarios, training a detector to mimic pseudo-labels from a model lacking ground-truth supervision can lead to undesirable artifacts, further amplified by repeated training (confirmation bias). Our method mitigates this problem by drawing on the field of RL: the exploration component is crucial for our method (shown below) and is not present in traditional self-training methods. By performing local exploration instead of simply updating from its own predictions, DRIFT avoids confirmation bias and ensures that labels improve over what it predicts. Thus, DRIFT is able to perform updates per training iteration as opposed to per self-training round (generally 60 epochs times the number of rounds), which allows it to converge significantly faster and achieve higher performance.
| | IoU 0.5 | IoU 0.7 |
|--------------|:-------:|:-------:|
| No Exploration | 0.0 | 0.0 |
| W/ Exploration | 41.8 | 26.7 |
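As a toy illustration of why local exploration can improve on the model's own predictions, the sketch below jitters a predicted box and keeps the highest-reward candidate. This is a hypothetical simplification for exposition; the box parameterization, jitter scheme, and reward function are placeholders, not DRIFT's actual implementation:

```python
import random

def explore(box, reward_fn, num_candidates=16, jitter=0.1, rng=None):
    """Toy local-exploration step.

    `box` is a (x, y, length, width) tuple; we perturb each coordinate
    uniformly within +/- jitter and keep whichever candidate scores
    highest under `reward_fn`. By construction the returned box never
    scores worse than the input prediction, so labels can only improve.
    """
    rng = rng or random.Random(0)
    best, best_r = box, reward_fn(box)
    for _ in range(num_candidates):
        cand = tuple(v + rng.uniform(-jitter, jitter) for v in box)
        r = reward_fn(cand)
        if r > best_r:
            best, best_r = cand, r
    return best
```

Training on the explored (higher-reward) box rather than the raw prediction is what breaks the confirmation-bias loop of plain self-training in this toy picture.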
We also provide a discussion on the shape templates and their generalizability, brought up by reviewers dARk and Baoq:
The setting we study assumes domain knowledge of the sizes of the objects we wish to discover, and we follow prior self-driving works in using the classes cars, pedestrians, cyclists, and trucks (Supplementary, Tab. 1). Our main results use the ground-truth mean and variance of class sizes computed on the Lyft dataset, and we show that they generalize to the Ithaca365 dataset *without changing the values* (Table 2). We also ablate the mixture weights in the Supplementary and show that the results are not sensitive to the values chosen (Table 8):
| Shape St. Dev. | IoU 0.5 | IoU 0.7 |
|------------------|---------|---------|
| 0.5 * I St. Dev. | 34.5 | 21.3 |
| 0.2 * I St. Dev. | 37.1 | 25.5 |
| True St. Dev. | 38.3 | 23.1 |
Per reviewer dARk’s suggestion, we further ablate numbers of mixture factors (class size priors) greater than 4. We compare the ground-truth number of mixture factors (4) to increased numbers of mixture factors at constant standard deviation and mixture weights:
| Num. Factors | IoU 0.5 | IoU 0.7 |
|--------------|:-------:|:-------:|
| 4 (GT) | 38.3 | 23.1 |
| 5 | 37.0 | 23.0 |
| 6 | 36.0 | 22.7 |
Here, we add “pseudo-class” size priors that are larger than the largest class, at 10- and 15-meter lengths, which do not exist in the data. Even in such extreme cases, our method is robust to the number of mixture factors. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This work contributes to unsupervised object discovery in LiDAR point clouds. It defines a few typical properties of plausible bounding boxes in LiDAR point clouds and then develops rewards for a reinforcement learning algorithm to learn them, as there is no gradient for direct optimization.
It is based on Persistency Prior scores defined by the MODEST approach but adds several steps to improve bounding box selection. Furthermore, they ablate many factors, e.g., the impact of enforcing a certain shape on bounding boxes. While they can only compare over a limited number of epochs due to time constraints, possibly because of the many ablation studies, they show they outperform the existing approach in that domain.
Strengths: The paper improves performance by a large margin in a very novel research area. Even if the approach itself may have limited use, it is straightforward to see the value e.g. in combination with a pre-trained traffic participant classifier.
The easy and intuitive explanation of the reward shaping is very clear. Figure 2 helps to get a very quick insight into the idea. The whole paper does a good job at combining intuitive explanations with mathematical formulations.
The approach has many individual steps and the pseudo-algorithm helps understanding.
The used Lyft Level 5 Perception dataset is a good choice for comparing with the state of the art.
The ablation studies are suitable to show that all used steps together achieve the performance.
All figure captions guide the reader nicely towards the main message that figure should convey.
Weaknesses: It seems this work takes many ideas from MODEST so the idea of using an object detector and defining common sense properties is not novel to this work. It needs to be read with that contribution offset in mind.
The title is maybe a bit boasting. Teaching cars to see sounds like solving most if not all perceptual problems while this approach, coming from unsupervised object detection, only separates moving objects from background.
Having some training left to run is not optimal but understandable. The paper feels a bit rough around the edges; the DRIFT abbreviation is explained twice.
The approach is a bit complex with many parameters to tune. Even though the ablation studies show the value of all individual components, it is hard to judge how much hyperparameter optimization vs. unsupervised modeling performance is present.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 2 Related Works: 3D Object Detection: I would disagree it is ideal to train detectors in an unsupervised way. Maybe ideal could be a combination to achieve best performance at reasonable costs while being able to assign actual labels (car, pedestrian, etc.). Maybe a statement like that could be just removed.
How robust is the algorithm to the number of traversals? This is not touched at all unless I overlooked it.
Jittered boxes and then NMS has strong similarity to region proposals in approaches like Mask R-CNN. Maybe it could be good comparing this historic approach in the Related Work.
What is the impact of the quality of the first pre-trained detector? If there is a false negative detection, how can this algorithm improve a non-existing bounding box?
In Reward Finetuning for Model Alignment, what are "human values"? Do the models output unexpected or wrong output or are there ethical considerations? The term seems confusing.
How have the lambda factors and mu_scale and sigma_scale been found? Similarly how was the scale of the scaled up box selected when designing the Alignment Reward? Since the approach is supposed to find objects unsupervised, hyperparameter tuning and then manually checking the results by a human who looks at the bounding box detections in the point cloud may improve the results but not in a fair way. It goes against the idea of unsupervised object detection.
In 3.2 what is meant by "In effect, this encourages all non-persistent points ... to propose boxes". How do points propose boxes? This is not like Yolo where each grid cell is associated with a fixed number of region proposals right?
Couldn't this approach potentially be expanded to actually label Car, Pedestrian, Cyclist and Truck based on the shape priors?
It would be worthwhile investigating the impact of false negatives to understand the impact of the approach vs. the failure of the first step of bounding box proposal. The authors did compare different ways of box predictions but that is different from quantifying the performance. Knowing how many false negatives are produced by the first step would already give some insight into this. Or is that the No Finetuning case?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: There is only one comparable work which is an unavoidable limit. The authors solved this by also comparing against other tasks, i.e. the supervised case.
Due to the deadline, DRIFT was not trained to convergence; however, it seems to outperform its competitor already. For the camera-ready, final numbers should be entered.
In their limitations the authors address most of the concerns in Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback! We address the individual points:
> Novelty: compared to MODEST
We highlight that although MODEST and DRIFT both use commonsense properties, MODEST uses them only when generating seed labels and filtering between self-training rounds. In contrast, DRIFT directly incorporates common sense rules throughout the training process using the reward function, which is a significantly more efficient way of leveraging the knowledge. In addition, DRIFT takes a fundamentally different approach towards object discovery by leveraging imitation learning tools from Reinforcement Learning that are designed to handle the non-continuous, non-differentiable nature of the commonsense properties. In doing so, DRIFT displays significantly faster convergence and stronger performance than MODEST (Fig. 1).
> Title and writing
We will adjust our title to make it more appropriate and descriptive of our task. We will remove the statement that it is ideal to train detectors in an unsupervised way and the repetitive explanation of the abbreviation.
> Robustness to the amount of traversals
The Lyft dataset has around 5 repeated traversals per location, and Ithaca365 has around 20 traversals per location. DRIFT performs competitively on both datasets, indicating that it is robust to the number of traversals and that the number of traversals needed does not have to be very large.
> Comparison with Mask R-CNN
Thank you for pointing this out; we will include corresponding discussion in our final version.
> What is the impact of the quality of the first pre-trained detector?
The pretrained detector can have bad quality (e.g. Tab. 1 “no finetuning”), but it needs to have some functionality and cannot be completely randomly initialized. A pretrained detector with better quality may further improve DRIFT’s performance.
> In Reward Finetuning for Model Alignment, what are "human values"?
The meaning of “human values” is context dependent. It could refer to output quality such as accuracy or coherence (e.g. [41] in Related Works) or ethical considerations (e.g. [30] in Related Works), etc. In the context of our task it refers to the common sense rules that the boxes should follow. We will make this clearer in our final version.
> How have the [hyperparameters] been found?
The lambda factors were found by hyperparameter tuning on the Lyft dataset. Intuitively, the selected values aim to balance each component of the reward and prevent one from dominating the others. The mu_scale, sigma_scale, and the 2x scaled-up range were adopted based on common-sense reasoning: the points should in general fall close to the inner surface of the bounding box, leading mu_scale to be slightly smaller than 1, and we intuitively consider a 2x scale a reasonable range to encompass points associated with the object a box aims to detect. Our ablation in Supp. Table 2 further indicates that reasonable alternative choices of mu_scale yield similar performance. We additionally ablate the standard deviation sigma_scale and observe that the result is not sensitive to the choice of value:
| Scale St. Dev. | IoU 0.5 | IoU 0.7 |
|----------------|:-------:|:-------:|
| 0.1 St. Dev. | 34.2 | 20.9 |
| 0.2 St. Dev. | 38.3 | 23.1 |
| 0.3 St. Dev. | 38.1 | 20.0 |
We would also like to highlight that we directly applied the hyperparameters we identified on the Lyft dataset to Ithaca365. Despite the differences in environment and data distribution, DRIFT attains strong performance on Ithaca365 using the hyperparameters from Lyft. This indicates that our default hyperparameters are very generalizable, and hyperparameter tuning or other human involvement may not be necessary when training DRIFT for a new domain.
> In 3.2 what is meant by "In effect, this encourages all non-persistent points ... to propose boxes"?
This is referring to the first stage of PointRCNN, which classifies each point as either foreground or background, and generates box proposals from foreground points. A focal loss is applied to the foreground/background classification. Here, we modify the classification labels used for the loss, such that a non-persistent (low PP-score) point predicted as background is still treated as foreground. A similar approach has been adopted in [1]. This modification encourages the detector to generate proposals around non-persistent points.
> Labeling based on the shape priors
Yes, DRIFT has the potential to provide a rough class labeling based on its shape priors, e.g. like what we did in the “Extension to Detection with Classes” section. It is worth noting that some classes (e.g. pedestrians vs. cyclists) can have similar sizes, and thus cannot be completely separated based on shape. Additional heuristics may be needed to improve the accuracy of the labeling.
> False negatives
As the detector gets trained for longer, it becomes better at identifying dynamic objects and is less likely to have false negatives. We report the recall at various epochs below. We also empirically find that additionally sampling boxes around non-persistent points (as a way to capture potential false negatives) does not improve the end performance (Table 6).
| Recall @ Ep. | IoU 0.5 |
|--|:--:|
| 30 | 0.47 |
| 60 | 0.51 |
| 90 | 0.53 |
| 120 | 0.56 |
> Final numbers
We provide the results below, and include discussion in ZfQG's response.
| | IoU 0.5 | IoU 0.7 |
|--|:--:|:--:|
| No Finetuning | 23.9 | 10.5 |
| DRIFT (300 ep) | 44.6 | 31.3 |
[1] You, Yurong, et al. 2022. Unsupervised Adaptation from Repeated Traversals for Autonomous Driving.
---
Rebuttal Comment 1.1:
Comment: Read the comments of the authors and other reviewers comments. I am content with the responses. | null | null | null | null | null | null |
SA-Solver: Stochastic Adams Solver for Fast Sampling of Diffusion Models | Accept (poster) | Summary: The authors propose a sampling method for diffusion probabilistic models by solving an alternative SDE with the same marginal distribution. Approximation techniques are applied in constructing the computationally efficient solver and corrector. The results show a superior FID with fewer function evaluations.
Strengths: 1. The construction of the alternative SDE and the approximation techniques, including score function approximation, change of variable and Lagrange interpolation, provide a complete framework for efficient DFM generation.
2. The proposed method can achieve lower FID scores with fewer function evaluations, which is promising.
Weaknesses: 1. The paper content seems not well organized yet.
2. In the appendix, for other methods, with different NFEs, the image content is consistent. But for the proposed method, the image content varies a lot. It might be caused by the inaccurate approximations.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Could you elaborate on why the images are not consistent for different NFEs in your method?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors did not address the limitations.
There is no potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First, we would like to thank you for taking the time to carefully review our paper, acknowledgment of our novel contributions, and the insightful questions. Below we respond to the questions:
Q1: The paper content seems not well organized yet.
A1: Thank you for your sincere advice. The paper is organized in the following order. The introduction part is contained in Section 1. In Sections 2 and 3, we introduce the background and related work on the fast sampling of diffusion models. In Section 4, we propose a family of diffusion SDEs called variance controlled diffusion SDEs which shares the same marginal probability distribution. In Section 5, we introduce our SA-Solver, which utilizes the stochastic Adams method to solve the SDEs we proposed in Section 4. In Section 6, we conduct several ablation studies and experiments to demonstrate the effectiveness of our method. We will carefully finalize our draft to make it more readable.
Q2: In the appendix, for other methods, with different NFEs, the image content is consistent. But for the proposed method, the image content varies a lot. It might be caused by inaccurate approximations.
A2: Thank you for noting such an interesting phenomenon. Rather than inaccurate approximations, we attribute the inconsistent content under different NFEs to the stochasticity of diffusion SDEs.
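This mechanism can be illustrated with a toy 1-D example, where a simple linear drift stands in for the learned score model (an illustration of the noise-consumption argument only, not our actual solver):

```python
import math
import random

def sample(x0, num_steps, stochastic, seed=0):
    """Toy 1-D Euler-type sampler with an optional per-step noise injection.

    The linear drift below is a placeholder for the learned score model.
    With stochastic=False (ODE-like), the endpoint depends only on x0 and
    converges as num_steps grows. With stochastic=True (SDE-like), each
    step consumes one Gaussian draw, so the same seed yields a *different*
    noise sequence for each NFE budget and hence different outputs.
    """
    rng = random.Random(seed)
    x = x0
    dt = 1.0 / num_steps
    for _ in range(num_steps):
        x = x + (-0.5 * x) * dt  # deterministic drift step
        if stochastic:
            x = x + math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return x
```

Under the same seed, the deterministic runs agree across step counts, while the stochastic runs do not, mirroring the consistent ODE samples and varying SDE samples in Figure 4.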
Concretely, for diffusion ODEs, owing to their deterministic formulation, there is a one-to-one correspondence between starting noise and generated image, so the same starting noise produces the same generated content under different NFEs (as shown in Figure 4). However, the one-to-one correspondence property does not hold for diffusion SDEs, as they inject extra randomness during the sampling procedure. Under the same random seed, a stochastic sampler with different NFEs will draw different numbers of Gaussian noise samples, which is why the image content varies. This explains the observed inconsistent contents under different NFEs for stochastic sampling methods, i.e., EDM (SDE) and our SA-Solver, in Figure 4 in the Appendix. | Summary: This paper extends UniPC to the stochastic setting to obtain a faster sampler called SA-Solver for diffusion models. Specifically, the authors start from the formulation of diffusion SDEs and derive the SA-Predictor and SA-Corrector. Extensive experiments on CIFAR10, ImageNet, LSUN, etc. demonstrate the effectiveness of the proposed SA-Solver.
Strengths: 1. It is interesting to consider sampling from the SDE instead of ODE. This opens up a new direction for the fast sampling of DPMs. I am surprised that this simple practice can improve the sampling quality so much.
2. The writing is clear and the derivation of SA-Solver is easy to understand. I especially appreciate Section 5.3, where the relationship with UniPC helps me to understand the proposed method at a glance.
3. The experiments are thorough and convincing. Most required experiments are included, such as unconditional/conditional sampling with data/noise prediction with different image resolutions.
Weaknesses: 1. It is questionable whether the novelty of SA-Solver is enough. As said in L228, SA-Solver can be viewed as an extension of UniPC with a non-zero $\tau(t)$. However, I think the idea to introduce stochasticity back to the sampling of the diffusion models also contributes to the novelty. I just feel worried that this issue might be raised by other reviewers.
2. Some experiments are lacking, such as comparing the convergence of different methods on COCO using a pre-trained text-to-image diffusion model. For now, only some qualitative results are provided. It would be better to also demonstrate some quantitative comparisons.
3. Some minor format issues: too much space below Table 1. Maybe the authors can try to make it look better.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the weaknesses.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations have been fully discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First, we would like to thank you for taking the time to carefully review our paper, acknowledgment of our novel contributions, and the insightful questions. Below we respond to the questions:
Q1: It is questionable whether the novelty of SA-Solver is enough.
A1: As we have claimed in lines 38-39, "adding properly scaled noise in the diffusion SDE may facilitate the quality of generated data". We thus focus on developing a solver for diffusion SDEs in this paper, which has not been well developed in the existing literature.
Though our method originates from techniques similar to existing diffusion-ODE solvers, i.e., multi-step and predictor-corrector methods, generalizing the multi-step method to diffusion SDEs is non-trivial. Concretely, the multi-step and predictor-corrector components for diffusion SDEs come from numerical SDE methods, and the core ideas behind them are quite different from their diffusion-ODE counterparts (e.g., Ito-Taylor expansion vs. Taylor expansion).
We also clarify the difference between SA-Solver (ours) and existing sampling methods in Sec. 5.3; most of them aim to accelerate the sampling of diffusion ODEs. We show that these methods are special cases of the proposed SA-Solver.
Q2: Some experiments are lacking, such as comparing the convergence of different methods on COCO using a pre-trained text-to-image diffusion model.
A2: We test 4 diffusion samplers: DDIM, DPM-Solver, UniPC, and our SA-Solver on Stable Diffusion v1.5. Following the standard evaluation procedure, we randomly draw 30k prompts from the MS-COCO validation set and report FID on the generated images. The results, provided in the table below, show that all solvers achieve similar FID scores. We attribute this to the powerful pretrained decoder, which can map a non-converged latent code to a good image sample. This phenomenon has also been observed in section 7.2 in [1].
| NFE\method | DDIM | DPM-Solver | UniPC | SA-Solver |
|-|-------|------------|-------|-----------|
| 20| 10.48 | 10.40 | 10.46 | 10.22 |
| 60 | 10.30 | 10.47 | 10.40 | 10.33 |
In [1][2], the authors compare the convergence speed of different diffusion ODE samplers by reporting the l2-distance between the sampled latent and the ground-truth latent (approximated by 999-step DDIM) under the same random seed and initial noise. However, this examination method does not fit stochastic samplers, since even given the same initial noise, the intrinsic stochasticity will guide stochastic samplers to different sampled latents.
Q3: Some minor format issues: too much space below Table 1. Maybe the authors can try to make it look better.
A3: Thank you for your sincere advice. We will revise it in the final version.
[1] Lu, C., Zhou, Y., Bao, F., Chen, J., Li, C. and Zhu, J., 2022. Dpm-solver++: Fast solver for guided sampling of diffusion probabilistic models.
[2] Zhao, W., Bai, L., Rao, Y., Zhou, J. and Lu, J., 2023. UniPC: A Unified Predictor-Corrector Framework for Fast Sampling of Diffusion Models.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I will keep my score. | Summary: The paper proposes a stochastic Adams solver for solving diffusion SDEs efficiently with a convergence guarantee. The authors adapt the stochastic Adams method from the numerical literature and use Lagrange interpolation to predict unknown terms. They show strong convergence for both predictor and corrector. Numerical experiments are done to verify their claims.
Strengths: 1. By deriving an explicit form of the solution of the SDE, the authors are able to utilize the stochastic Adams method with Lagrange interpolation to obtain an efficient diffusion SDE solver.
2. Theoretical analysis is well-written.
3. Numerical experiments show improvements compared to other solvers, especially the diffusion ODE solvers.
4. It sort of unifies previous methods in some sense.
Weaknesses: 1. My main concern is, from Figure 2, it seems SA-solver only outperforms other methods within a certain range of NFE. This "optimal" range looks very different for different data sets. In reality, if I use SA-solver for the sake of doing fewer function evals, how do I know when it outperforms other methods?
2. I don't think I fully understand how to choose the parameter tau(t) in the experiments. Please comment on this.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First, we would like to thank you for taking the time to carefully review our paper, acknowledgment of our novel contributions, and the insightful questions. Below we respond to the questions:
Q1: I don't think I fully understand how to choose the parameter tau(t) in the experiments. Please comment on this.
A1: In our experiments, we use a constant function $\tau(t) = \tau, \forall t \in [0, T]$. In Section 6.1, we use $\tau = 1$ for the comparison of the data-prediction model and noise-prediction model. In Section 6.2, we vary $\tau$ over $\{0.0, 0.2, \cdots, 1.6\}$ to explore the effect of different NFEs and $\tau$ on the result. In Section 6.3, for relatively small NFEs we set a properly small $\tau$ value, and we use $\tau = 1$ above 20 NFEs.
In fact, we want to clarify that $\tau = 1$ for SA-Solver is generally not the optimal setting. We uniformly use it for our method in Tables 1 and 2 over different datasets because we aim to make a fair comparison with other methods (no extra-tuned hyperparameters).
To further address the concern about choosing the proper $\tau(t)$, we conduct an ablation study in section 6.1 (see Figure 1) by varying the value of $\tau$. Our empirical results suggest that increasing $\tau$ with NFEs improves the FID.
Moreover, we empirically observe that using a non-constant $\tau(t)$ yields better results than the constant schedule presented in this paper, so we suggest using it in practice. The improved strategy sets $\tau(t)=\tau$ in the interval $[t_{min}, t_{max}]$ and $\tau(t)=0$ otherwise, where $t_{min}$ and $t_{max}$ are selected as in [1]. Below are the results under the improved $\tau(t)$.
| method\NFE on CIFAR 10 | 11 | 15 | 23 | 31 | 47 | 63 | 95 |
|---------------------------|----------|----------|----------|----------|----------|----------|----------|
| SA-Solver(vanilla) | 7.49 | **4.84** | 4.04 | 3.41 | 3.18 | 3.24 | 3.17 |
| SA-Solver(improved) | 6.46 | 4.91 | **3.77** | **3.40** | **2.92** | **2.74** | **2.63** |
| Best over baseline method | **6.41** | 5.01 | 4.04 | 3.82 | 3.59 | 3.36 | 3.06 |
| method\NFE on ImageNet64 | 15 | 23 | 31 | 47 | 63 | 95 |
|---------------------------|----------|----------|----------|----------|----------|----------|
| SA-Solver(vanilla) | 3.65 | 3.08 | 2.77 | 2.40 | 2.30 | 2.22 |
| SA-Solver(improved) | **3.41** | **2.61** | **2.23** | **1.95** | **1.88** | **1.81** |
| Best over baseline method | 3.49 | 2.83 | 2.75 | 2.72 | 2.44 | 2.22 |
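For concreteness, the piecewise-constant schedule described above could be sketched as follows. The function name `tau_schedule` and the parameter names `t_min`/`t_max` are illustrative only; the actual interval endpoints are the tuned values selected as in [1], so this is a reading of the rebuttal, not the authors' code.

```python
def tau_schedule(t, tau, t_min, t_max):
    """Piecewise-constant stochasticity schedule: inject noise (tau)
    only inside [t_min, t_max]; run deterministically (tau = 0) elsewhere."""
    return tau if t_min <= t <= t_max else 0.0
```

With this schedule, the sampler behaves like an ODE solver near the endpoints of the trajectory and like an SDE solver in the middle, which matches the intuition that stochasticity helps most at intermediate noise levels.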
Q2: My main concern is, from Figure 2, it seems SA-solver only outperforms other methods within a certain range of NFE. This "optimal" range looks very different for different data sets. In reality, if I use SA-solver for the sake of doing fewer function evals, how do I know when it outperforms other methods?
A2: Thank you for pointing this out. Under the improved $\tau(t)$ described in A1, our method almost consistently beats the other baseline methods, as presented in the tables above.
[1] Karras, T., Aittala, M., Aila, T. and Laine, S., 2022. Elucidating the design space of diffusion-based generative models. | Summary: The paper proposes a new solver for diffusion SDEs, termed SA-Solver, combining the ideas of a predictor-corrector scheme and stochastic Adams solvers. The predictor and corrector utilize Lagrange polynomials for extrapolation to lower the approximation error at future timestamps. Experimentally, SA-Solver improves over previous ODE and SDE solvers across a wide range of image generation benchmarks.
Strengths: - The considered problem of accelerating SDE solvers is of great practical value in the field, as SDE solvers often deliver better sample quality than ODE solvers but are hindered by their slow sampling speed.
- Adapting the idea in Stochastic Adams is interesting, offering higher-order convergence with Lagrange polynomial extrapolation.
- The fusion of the PC sampler and Lagrange polynomial yields superior empirical results on diffusion SDE, tested on a range of dataset resolutions from 32x32 to 256x256. It's nice to see the methods scale to different resolutions and architectures.
Weaknesses: - Throughout the paper, the authors mention several times that empirically the quality of data generated by SDEs has a better upper limit. I think some theoretical analysis in the main text would make the paper more self-contained. A concurrent work [1] provides some theoretical arguments and comparisons between SDE and ODE, showing that the stochasticity in SDE can reduce overall sample errors. I think it would be beneficial to discuss them to better motivate the idea of the paper.
- The authors didn't provide detailed discussions of the benefit of introducing $\tau(t)$ (in the end, the authors set $\tau(t)=1$ in the experiments). Since the authors observe the benefit of larger $\tau$ at larger NFE, why not use a varying $\tau$ in Fig. 2 based on NFE? In addition, the benefit of a larger $\tau$ at larger NFE seems predictable based on the theory in [1].
- The authors did some comparisons of epsilon-prediction and data prediction models. It seems that EDM [2] uses interpolation between these two by pre-conditioning. Does it offer better results?
- Unlike the deterministic path in ODE, the randomness in SDE can change the sample trajectories quite a bit. For example, the trajectory of generating a dog can stray into generating a cat. I wonder if this is an issue for using Lagrange polynomials in SDE, since the preceding predictions could be misleading. If not, could the authors provide some intuitions? In addition, the authors mention that "predictor step 2 and corrector step 1 is the most stable setup"; such small steps could be caused by the issue I raise.
- The paper separately gives the convergence order of the predictor and corrector. Is it possible to give the theoretical order after combining the predictor and corrector?
- I think the authors didn't use the "EDM VE" model for CIFAR-10. In Table 2 of EDM [2], the unconditional VE (in config F, EDM VE) can achieve an FID of 1.98 using only 35 NFE, as opposed to the >3 FID in Fig 2 of the current paper. So I guess the authors use their VE model in config A (baseline VE). Could the authors compare different methods using the VP model (config A), as in [1] and [2], because the VP model is more commonly used in practice and offers better results?
- The stochastic sampler in the concurrent work [1] obtains an FID ~ 1.8 using less than 100 NFE on ImageNet-64 with Pixel DPM, as opposed to the > 2.1 FID in Fig 2 of the current paper. I wonder if one could combine the multi-step idea in this paper with [1] to obtain further improvements.
- Could the author also provide quantitative results on Stable Diffusion experiments, to better showcase the advantage of SA-Solver? Current visualized images do not seem compelling.
### Minors
- line 225, DPM-solver++ and UniPC are *ODE* solvers which would not naturally be special cases of the proposed *SDE* solver.
[1] Restart Sampling for Improving Generative Processes, Xu et al, https://arxiv.org/abs/2306.14878
[2] Elucidating the Design Space of Diffusion-Based Generative Models, Karras et al, https://arxiv.org/abs/2206.00364
# Post rebuttal
Thanks to the authors for the rebuttal. After another pass of the paper, it occurred to me that the paper is a direct application of the ideas (variation-of-parameters, or exponential integrator, and Lagrange polynomials for approximation) in [1] to SDE samplers. The same formula for the exponential-integrator version of the SDE (Eq. 5) has been proposed in the prior work [1] (see their Eq. 17, arXiv version 1). In addition, [1] also uses a Lagrange polynomial to extrapolate. Because it is a straightforward extension of prior work, I will keep my score unchanged.
[1] Zhang, Qinsheng, and Yongxin Chen. "Fast sampling of diffusion models with exponential integrator." arXiv preprint arXiv:2204.13902 (2022).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I copy-and-paste some questions from above:
- Since the authors observe the benefit of larger $\tau$ on larger NFE, why not use a varying $\tau$ in Fig.2 based on NFE?
- I wonder if this is an issue for using Lagrange polynomials in SDE, since the preceding predictions could be misleading. If not, could the authors provide some intuitions? In addition, the authors mention that "predictor step 2 and corrector step 1 is the most stable setup"; such small steps could be caused by the issue I raise.
- Is it possible to give the theoretical order after combining the predictor and corrector?
- Could the author compare different methods using the VP model (config A), as in [1] and [2], because the VP model is more commonly used in practice and offers better results?
- Could the author also provide quantitative results on Stable Diffusion experiments, to better showcase the advantage of SA-Solver?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First, we would like to thank you for taking the time to carefully review our paper, for acknowledging our novel contributions, and for the insightful questions. Below we respond to the questions:
Q1: I think some theoretical analysis in the main text would make the paper more self-contained. A concurrent work [1] provides some theoretical arguments and comparisons between SDE and ODE.
A1: Thank you for pointing out the interesting concurrent work [1]; we will add a discussion of it in our revised version. In this work, the authors clarify the advantage of SDE over ODE by proving it has a lower upper bound on sampling error in Wasserstein-1 distance. An upper bound on sampling error in terms of KL divergence for the variance-controlled SDE is also provided in our supplementary material, which indicates the advantage of SDE over ODE in KL divergence.
Q2: The authors didn't provide detailed discussions of the benefit of introducing $\tau(t)$. Why not use a varying $\tau$ in Fig.2 based on NFE?
A2: Thank you for pointing this out. In fact, $\tau = 1$ is not the optimal setting for SA-Solver; we use it because we aim to make a fair comparison with other methods (no extra-tuned hyperparameters).
We evaluate our SA-Solver with a non-constant $\tau(t)$ inspired by EDM [2], tuning $\tau$ over $\{0.0, 0.2, \cdots, 1.6, 1.8\}$, and we observe improved performance. Concretely, $\tau(t)=\tau$ in the interval $[t_{min}, t_{max}]$ and $\tau(t)=0$ otherwise, where $t_{min}$ and $t_{max}$ are selected as in [2].
|method\NFE on CIFAR10(EDM baseline-VE)|11|15|23|31|47|63|95|
|-|-|-|-|-|-|-|-|
|SA-Solver(vanilla)|7.49|**4.84**|4.04|3.41|3.18|3.24|3.17|
|SA-Solver(improved)|**6.46**|4.91|**3.77**|**3.40**|**2.92**|**2.74**|**2.63**|
|method\NFE on ImageNet64(ADM)|15|23|31|47|63|95|
|-|-|-|-|-|-|-|
|SA-Solver(vanilla)|3.65|3.08|2.77|2.40|2.30|2.22|
|SA-Solver(improved)|**3.41**|**2.61**|**2.23**|**1.95**|**1.88**|**1.81**|
The first row uses the original setting in the paper, i.e., a constant, untuned $\tau(t)$.
The second row uses the improved setting, i.e., a piecewise-constant, tuned $\tau(t)$.
Q3: The authors did comparisons of epsilon and data prediction models. Does EDM's interpolation offer better results?
A3: It is an interesting point! In Section A.2.4, we show that the data-prediction model injects a smaller variance compared with the noise-prediction model. Similarly, the injected noise of the interpolation model is larger than that of the data-prediction model. Thus we speculate that interpolation may not bring extra benefits.
Q4: I wonder if this is an issue for using Lagrange polynomials in SDE since the sample trajectories change and the preceding predictions could be misleading.
A4: The theoretical convergence result indicates at least the stochastic multistep method will converge in the distribution sense.
Q5: Is it possible to give the theoretical order after combining the predictor and corrector?
A5: The convergence result of the ODE predictor-corrector method is well established, e.g., see [5]. To the best of our knowledge, the theoretical convergence result of the stochastic multi-step method with a predictor-corrector is not known yet.
Q6: So I guess the author use their VE model in config A (baseline VE).
A6: Yes. We will revise the expression to 'EDM baseline-VE' to avoid misunderstanding.
Q7: Could the author compare different methods using the VP model (config A)?
A7: We provide the results of the baseline-VP CIFAR10 model as below.
|method\NFE |11|15|23|31|47|63|
|-|-|-|-|-|-|-|
|DDIM|17.07|11.57|7.33|5.68|4.41|3.89|
|DPM-Solver|**6.31**|**4.72**|3.46|3.28|3.07|2.99|
|UniPC|7.05|5.59|3.08|2.88|2.88|2.88|
|EDM-ODE|18.41|6.52|3.52|3.10|2.99|2.95|
|EDM-SDE|29.90|10.21|4.85|3.77|3.08|2.84|
|SA-Solver|7.05|5.59|**3.03**|**2.70**|**2.50**|**2.39**|
Q8: The stochastic sampler in the concurrent work [1] obtains an FID ~ 1.8 using less than 100 NFE on ImageNet-64, as opposed to the $>$ 2.1 FID of the current paper. I wonder if one could combine the two ideas to obtain further improvements.
A8: Thank you for your advice! We think the idea in [1] is interesting and the result is promising. We first want to point out that we have a typo in line 271, which is misleading. We actually use EDM baseline-VE for CIFAR10 and ADM [4] for ImageNet64. We will correct this typo in the revised version.
Our results for ImageNet64 are based on ADM, which is slightly weaker than EDM (1.55 vs. 1.36 SOTA FID). For a fair comparison, we run our method with the non-constant $\tau(t)$ from A2 on the EDM ImageNet64 model and provide the results below. The idea of combining the two papers can be explored in the future.
|method\NFE|39|67|99|165|
|-|-|-|-|-|
|Restart|2.38|1.95|1.71|1.51|
|SA-Solver|**1.80**($\tau=1.0$)|**1.58**($\tau=1.4$)|**1.49**($\tau=1.8$)|**1.44**($\tau=2.2$)|
Q9: Could the author also provide quantitative results on Stable Diffusion experiments, to better showcase the advantage of SA-Solver? Current visualized images do not seem compelling.
A9: We test on Stable Diffusion v1.5. Following the standard evaluation procedure, we randomly draw 30k prompts from the MS-COCO validation set. The results show that all solvers achieve similar FID results. We attribute this to the powerful pretrained decoder, which can map a non-converged latent code to a good image sample. This phenomenon has also been observed in section 7.2 in [3].
|NFE\method|DDIM|DPM-Solver|UniPC|SA-Solver|
|-|-|-|--|-|
|20|10.48|10.40|10.46|10.22|
|60|10.30|10.47|10.40|10.33|
[1]Xu, Yilun, et al. "Restart Sampling for Improving Generative Processes."
[2]Karras, Tero, et al. "Elucidating the design space of diffusion-based generative models."
[3]Lu, Cheng, et al. "Dpm-solver++: Fast solver for guided sampling of diffusion probabilistic models."
[4]Dhariwal, Prafulla, and Alexander Nichol. "Diffusion models beat gans on image synthesis."
[5]Gragg, William B., and Hans J. Stetter. "Generalized multistep predictor-corrector methods." | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper studies the problem of fast sampling of diffusion models. Since standard DDPM sampling is slow, this is a very important topic of late, with many competing methods. Many methods reformulate as a ODE solving problem, which makes it easier to do few-step sampling. However, it has been noted that sampling from the original SDE formulation (as opposed to ODE) can lead to better samples if there is budget for taking many diffusion steps. Therefore, this paper studies the problem of accelerating sampling of the SDE.
The method, named SA-Solver, is a predictor-corrector method. It incorporates some recent findings, such as semi-linearity, but mainly uses the method of Lagrange interpolation. More specifically, the predictor step involves (uniquely) fitting a polynomial of degree s-1 using s predictions and reading off the polynomial value at desired points. Finally, a correction step is incorporated that plugs the predicted value at the new timestep (t+1) back into the formula to choose a new predicted value (similar in spirit to Heun's method). The paper draws connections to DDIM (deterministic version of the 1-step SA-Predictor), DPM-Solver (deterministic version of the 2-step SA-Predictor), and UniPC (deterministic version of SA-Solver).
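The predictor step summarized here (fit the unique degree-(s-1) polynomial through s past evaluations, then read it off at the next timestep) can be sketched with plain Lagrange basis polynomials. This is a generic scalar illustration of the extrapolation idea, not the SA-Solver implementation, which additionally applies the exponential-integrator change of variables and operates on model outputs.

```python
def lagrange_extrapolate(ts, ys, t_next):
    """Evaluate at t_next the unique degree-(s-1) polynomial passing
    through the s points (ts[i], ys[i]), using Lagrange basis polynomials."""
    s = len(ts)
    total = 0.0
    for i in range(s):
        basis = 1.0
        for j in range(s):
            if j != i:
                basis *= (t_next - ts[j]) / (ts[i] - ts[j])
        total += ys[i] * basis
    return total
```

For example, with s = 2 this reduces to linear extrapolation, which is why the 1-step and 2-step SA-Predictors recover DDIM- and DPM-Solver-like updates in the deterministic limit.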
The paper shows sampling results on cifar10, imagenet64, and imagenet256 (latent), demonstrating best FID scores (conditioned on the same number of NFEs) when using more than 30 NFEs.
Strengths: The problem being studied is an important one, as sampling speed is arguably the biggest weakness of diffusion models (and diffusion models are being used in a huge number of applications right now).
The experimental results seem quite promising, showing better FID scores than DDIM/DPMSolver, which are very popular baselines.
Weaknesses: Seems to be missing quantitative evaluation (e.g. CLIP score on stable diffusion) for text-to-image tasks. Also, evaluating on different domains besides image would improve the empirical results.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How much of the performance is due to the predictor/corrector approach, versus the lagrange interpolation? An ablation study in this regard would improve the paper and our understanding of the method.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First, we would like to thank you for taking the time to carefully review our paper, for acknowledging our novel contributions, and for the insightful questions. Below we respond to the questions:
Q1: Seems to be missing quantitative evaluation (e.g. CLIP score on stable diffusion) for text-to-image tasks.
A1: In fact, for text-to-image tasks, papers on fast samplers rarely report the FID and CLIP scores, as all solvers achieve similar FID results [1][2]. These works attribute this to the powerful pretrained decoder, which can map a non-converged latent code to a good image sample [1][2]. We test 4 diffusion samplers: DDIM, DPM-Solver, UniPC, and our SA-Solver on Stable Diffusion v1.5. Following the standard evaluation procedure, we randomly draw 30k prompts from the MS-COCO validation set and report the FID results on the generated images. The results are provided in the table below, which shows similar conclusions as the previous works.
| NFE\method | DDIM | DPM-Solver | UniPC | SA-Solver |
|------------|-------|------------|-------|-----------|
| 20 | 10.48 | 10.40 | 10.46 | 10.22 |
| 60 | 10.30 | 10.47 | 10.40 | 10.33 |
Q2: Also, evaluating on different domains besides image would improve the empirical results.
A2: Thanks for your valuable advice! But we are not familiar with generation tasks in other domains. We will consider adding some experiments in other domains in the future.
Q3: How much of the performance is due to the predictor/corrector approach, versus the Lagrange interpolation? An ablation study in this regard would improve the paper and our understanding of the method.
A3: This is an interesting and valuable question. To explore this, we conduct an ablation study on the effect of Lagrange interpolation and the predictor-corrector scheme on the CIFAR10 dataset, using the EDM baseline-VE pretrained checkpoint. Concretely, we vary the number of predictor steps and run each setting with and without the corrector to separately explore the effect of the two components. As can be seen, both Lagrange interpolation (Predictor 1-step only vs. Predictor 3-steps only) and the corrector (Predictor 1-step only vs. Predictor 1-step with Corrector 1-step, and Predictor 3-steps only vs. Predictor 3-steps with Corrector 3-steps) improve the performance of our sampler.
| method \ (NFE, $\tau$) | (15, 0.4) | (23, 0.8) | (31, 1.0) | (47, 1.4) |
|------------------------------------------|----------|----------|----------|----------|
| Predictor 1-step only | 13.76 | 12.44 | 11.72 | 14.67 |
| Predictor 1-step with Corrector 1-step | 8.49 | 6.87 | 6.13 | 6.75 |
| Predictor 3-steps only | 5.30 | 3.93 | 3.52 | 2.98 |
| Predictor 3-steps with Corrector 3-steps | **4.91** | **3.77** | **3.40** | **2.92** |
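To make the predictor-corrector interplay ablated above concrete, here is a generic deterministic PECE scheme (two-step Adams-Bashforth predictor followed by a trapezoidal corrector) on a toy ODE. SA-Solver itself is the stochastic, exponential-integrator analogue for the diffusion SDE, so this sketch only illustrates the general structure, and all names here are hypothetical.

```python
def pece_ab2_trap(f, y0, t0, h, n):
    """PECE scheme: 2-step Adams-Bashforth predictor + trapezoidal corrector.

    Takes n steps of size h for y' = f(t, y), starting from y(t0) = y0.
    The second point is bootstrapped with Heun's method.
    """
    k = f(t0, y0)
    y1 = y0 + 0.5 * h * (k + f(t0 + h, y0 + h * k))  # Heun bootstrap
    ts, ys = [t0, t0 + h], [y0, y1]
    for i in range(1, n):
        t, y = ts[i], ys[i]
        f_prev, f_cur = f(ts[i - 1], ys[i - 1]), f(t, y)
        y_pred = y + h * (1.5 * f_cur - 0.5 * f_prev)      # Predict (AB2)
        y_corr = y + 0.5 * h * (f_cur + f(t + h, y_pred))  # Correct (trapezoid)
        ts.append(t + h)
        ys.append(y_corr)
    return ts, ys
```

As in the ablation, raising the predictor order (more past evaluations) and adding the corrector pass each reduce the per-step error at no extra model-evaluation cost beyond the corrector's one evaluation.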
[1] Lu, C., Zhou, Y., Bao, F., Chen, J., Li, C. and Zhu, J., 2022. Dpm-solver++: Fast solver for guided sampling of diffusion probabilistic models.
[2] Zhao, W., Bai, L., Rao, Y., Zhou, J. and Lu, J., 2023. UniPC: A Unified Predictor-Corrector Framework for Fast Sampling of Diffusion Models.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for the response, the ablation study is excellent. I stand by my recommendation of acceptance.
A1: for text-to-image tasks, papers on fast samplers rarely report the FID and CLIP scores, as all solvers achieve similar FID results [1][2]. These works attribute this to the powerful pretrained decoder, which can map a non-converged latent code to a good image sample
Having a powerful decoder is a property of latent diffusion (as opposed to pixel diffusion), and is not an issue of text-to-image vs unconditional generation right? In other words, this statement seems to also imply Imagenet 256x256 with latent DPM is easy?
---
Reply to Comment 1.1.1:
Comment: Thank you for your comment. We believe two factors influence the FID of SD v1.5 on MS-COCO: one is the encoder/decoder architecture, and the other is the text-to-image task itself. The rationale behind the latter aspect has not been definitively established yet. However, we hypothesize that improved sample quality doesn't necessarily guarantee that the sample will resemble the COCO sample under the same prompt. Please let us know if you have any other questions. Thanks again for your valuable review! | Summary: The paper presents a multistep SDE solver for diffusion models instead of ODE solvers. The main goal is to obtain diverse and high-quality samples while reducing the number of solver steps required. To do this, the paper proposes a new SDE that includes an additional term while keeping the marginal distribution unchanged. Based on this new SDE, the paper adopts the stochastic Adams method, introducing two key components: the SA-Predictor and the SA-Corrector. Convergence results for both are provided. Experiments are conducted in different scenarios, including varying stochastic noise scales and different models. The results support the claims on the reduction in NFEs.
Strengths: The paper has the following strong points:
- The experiments show state-of-the-art results in terms of FID given a limited NFE budget. This highlights the effectiveness of the proposed method in achieving impressive results with efficient resource utilization.
- It provides theoretical studies on the convergence of the proposed method, offering valuable insights.
Weaknesses: The main weaknesses I find in this paper are:
- In terms of its contribution to diffusion model research, the proposed method may be considered incremental. This is because the key factors leading to good results, such as predictor-corrector and multi-step schemes, have been well established in the existing literature.
- The paper's presentation can be improved, in particular the exposition of the stochastic Adams method. It would be good to dedicate a small paragraph or section explicitly discussing this method to provide readers with a clear understanding of how it is employed in the proposed approach.
- The paper lacks clarity in establishing the connection between the variance-controlled SDE and the use of the stochastic Adams method. The discussion should be more thorough and elaborate, especially in addressing why setting $\tau=1$ (falling back to DDPM) is the best choice in many cases, and whether the stochastic Adams method significantly contributes to the model's overall performance.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors:
How many predictor steps $s_p$ and corrector steps $s_c$ are used?
Minor points,
- Line 199, please add the notation stepsize $h$
- Eq (13), please add the description for the Lagrange basis $l(t)$.
- Caption of Fig. 1, "scholastic" -> "stochastic"
- There are some typos in Appendix
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The paper mentioned a current limitation is that the optimal $\tau(t)$ is unknown, which I agree. There many factors affecting this parameters including NFE, dataset and approximated score function.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your valuable comments and carefully reviews. Below are our responses to the raised questions:
Q1: The proposed method may be considered incremental. This is because the key factor leading to good results, such as predictor-corrector, and multi-step have been well-established in the existing literature.
A1: The multi-step solver and predictor-corrector method have been well-established and studied in ODE solvers of diffusion models, e.g. [1][2][3]. However, as we have claimed in lines 38-39, "adding properly scaled noise in the diffusion SDE may facilitate the quality of generated data". Thus we focus on developing a solver for diffusion SDEs in this paper, which has not been well developed in the existing literature.
Back to your concern, although the mentioned techniques (e.g., multiple-step method) have been used in diffusion ODEs, generalizing it into diffusion SDEs is non-trivial. Concretely, the core technique of the multi-step method of diffusion SDEs is from numerical SDE (e.g., Ito-Taylor expansion), which is quite different from the one used in diffusion ODEs.
Q2: The paper's presentation can be improved, in particular the exposition of the stochastic Adams method. It would be good to dedicate a small paragraph or section explicitly discussing this method to provide readers with a clear understanding of how it is employed in the proposed approach.
A2: Thank you for your sincere advice. We will carefully revise the order of our content to make it more readable, and we will add a discussion of the stochastic Adams method right after line 168, as you suggested. We will also consider adding a section in the appendix that explicitly introduces the stochastic Adams method to help readers better understand it.
Q3: The paper lacks clarity in establishing the connection between the variance-control SDE and the use of stochastic Adam methods.
A3: Stochastic Adams method is a numerical method to solve SDEs and variance-controlled SDE is a specialized SDE. The connection has been clarified at the beginning of Section 5.
Q4: The discussion should be more thorough and elaborate, especially in addressing why setting $\tau = 1$
(falling back to DDPM) is the best choice in many cases.
A4: Thank you for pointing this out. In fact, we want to clarify that $\tau = 1$ is generally not the optimal setting for SA-Solver, as noted in line 259. We uniformly use it over different datasets because we aim to make a fair comparison with other methods (no extra-tuned hyperparameters). As can be seen in the ablation study of $\tau$ in Tables 6, 7, 8, and 9 in our supplementary material, SA-Solver with larger $\tau$ (even larger than 1) exhibits improved results, especially at larger NFEs.
To further address your concern, we evaluate our SA-Solver with an improved non-constant $\tau(t)$ inspired by EDM [4] (compared with the results in this paper). Concretely, $\tau(t)=\tau$ in the interval $[t_{min}, t_{max}]$ and $\tau(t)=0$ otherwise, where $t_{min}$ and $t_{max}$ are selected as in [4]. The results are summarized below, and we will add them to the revised version.
| method\NFE on CIFAR10(EDM baseline-VE) | 11 | 15 | 23 | 31 | 47 | 63 | 95 |
|---------------------------|----------|----------|----------|----------|----------|----------|----------|
| SA-Solver(vanilla) | 7.49 | **4.84** | 4.04 | 3.41 | 3.18 | 3.24 | 3.17 |
| SA-Solver(improved) |**6.46** | 4.91 | **3.77** | **3.40** | **2.92** | **2.74** | **2.63** |
| method\NFE on ImageNet64(ADM) | 15 | 23 | 31 | 47 | 63 | 95 |
|---------------------------|----------|----------|----------|----------|----------|----------|
| SA-Solver(vanilla) | 3.65 | 3.08 | 2.77 | 2.40 | 2.30 | 2.22 |
| SA-Solver(improved) | **3.41** | **2.61** | **2.23** | **1.95** | **1.88** | **1.81** |
The first row uses the original setting in the paper, i.e., a constant, untuned $\tau(t)$.
The second row uses the improved setting, i.e., a piecewise-constant, tuned $\tau(t)$.
Q5: How many of predictor step $s_p$ and corrector step $s_c$ is used?
A5: As clarified in line 239, we use $s_p = 2$ and $s_c = 1$ in the experiments, as this setting is observed to be stable in practice.
[1] Liu, L., Ren, Y., Lin, Z. and Zhao, Z., 2022. Pseudo numerical methods for diffusion models on manifolds.
[2] Zhao, W., Bai, L., Rao, Y., Zhou, J. and Lu, J., 2023. UniPC: A Unified Predictor-Corrector Framework for Fast Sampling of Diffusion Models.
[3] Li, S., Liu, L., Chai, Z., Li, R. and Tan, X., 2023. ERA-Solver: Error-Robust Adams Solver for Fast Sampling of Diffusion Probabilistic Models.
[4] Karras, T., Aittala, M., Aila, T. and Laine, S., 2022. Elucidating the design space of diffusion-based generative models. | null | null | null | null |
Norm-guided latent space exploration for text-to-image generation | Accept (poster) | Summary: This paper proposes a novel method for interpolating between two seeds and demonstrates that it defines a new non-Euclidean metric that takes into account a norm-based prior on seeds. This paper describes a simple yet efficient algorithm for approximating this metric and using it to further define centroids in the latent seed space, which helps generate rare concept images and leads to state-of-the-art performance on few-shot and long-tail benchmarks.
Strengths: - This paper first discusses the property of the seed, that is, the relationship between the norm of the seed and the quality of the generated image, which provides good theoretical support for the proposed new non-Euclidean metric.
- The newly proposed non-Euclidean metric combined with the centroid method has a good effect according to the experimental results and has been optimized to a certain extent for problems such as rare concept generation and long tail training.
- From the seed level, the paper investigates the text-to-image generation problem of the diffusion model and verifies the feasibility of controlling image generation at the seed level.
Weaknesses: - The early sections elaborate and verify the seed-norm property at length, but the core path optimization and centroid method are not elaborated enough.
- Judging from the experimental results of the pictures in the article, the method in this paper does not seem to show a particularly great advantage, especially after adding the seed-selection method.
- The two interpolation methods compared in this article are very basic; there are more non-linear interpolation methods to compare, and the results after interpolation seem inconsistent with actual behavior. In my understanding (and in my own experiments), no matter what kind of Gaussian noise the seed is, SDM can generate a relatively reasonable image, rather than an unnatural noisy image like the ones presented in Figure 1 (left).
- From the point of view of experimental design, the seed select method is also a key part, such as Figure 5, but it has not been explained in detail.
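The norm sensitivity at issue in the interpolation comparison is easy to see with the two baseline interpolants: linear interpolation (lerp) pulls midpoints toward the origin, shrinking the seed norm below what the Gaussian prior expects, while spherical interpolation (slerp) preserves it. The snippet below is a generic 2D illustration of those two baselines, not the paper's norm-guided metric; the function names are mine.

```python
import math

def lerp(z0, z1, a):
    """Linear interpolation between two seed vectors."""
    return [(1 - a) * x + a * y for x, y in zip(z0, z1)]

def slerp(z0, z1, a):
    """Spherical linear interpolation between two seed vectors."""
    dot = sum(x * y for x, y in zip(z0, z1))
    n0 = math.sqrt(sum(x * x for x in z0))
    n1 = math.sqrt(sum(y * y for y in z1))
    omega = math.acos(max(-1.0, min(1.0, dot / (n0 * n1))))
    so = math.sin(omega)
    if so < 1e-8:  # (nearly) parallel vectors: fall back to lerp
        return lerp(z0, z1, a)
    return [(math.sin((1 - a) * omega) * x + math.sin(a * omega) * y) / so
            for x, y in zip(z0, z1)]
```

For two orthogonal unit-norm seeds, the lerp midpoint has norm about 0.707 while the slerp midpoint keeps norm 1, which is why naive linear interpolation drifts into low-probability regions of the high-dimensional seed prior.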
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - There are some formatting errors in the article, such as the picture in the upper right corner of page 4 without icons and annotations.
- I cannot understand the interpolation path in the 2D space. I hope for more explanation of the 2D space, and of how the result in Figure 1 (right) is obtained.
- Other concerns have already been mentioned in Weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, the authors have addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for finding our approach effective with good theoretical support. We address your comments below.
#### **Q1: Core path optimization and centroid method did not elaborate enough.**
**A1:** We value the reviewer’s feedback to improve our paper. Due to lack of space, we provide more details on this in Sec. C in the Supplementary material. We will elaborate on this section in the final version with additional details.
#### **Q2: The method lacks an advantage with seed-selection, based on results.**
**A2:** Please refer to response A1 in the shared answer for all reviewers. NAO has two main advantages over SeedSelect: it is 10x faster, and it improves quality in terms of both FID and accuracy (see Table 2 for the numbers).
#### **Q3: The two interpolation methods compared in this article are very basic**
**A3:** We compared to SoTA approaches in latent space interpolation, and agree that smarter methods can be used. However, we argue that being aware of the latent space structure plays a far more significant role than using smarter interpolation methods. The key factor that has so far been neglected in latent space interpolation is awareness of the distribution of samples in that space; this is the main contribution of this paper. We are open to evaluating additional methods upon request and appreciate the reviewer's observation as a potential avenue for future exploration.
#### **Q4: No matter what kind of Gaussian noise the seed is, SD can generate a relatively reasonable image.**
**A4:** Thank you for your comment; however, we would like to clarify that this may not be entirely accurate. SD generates reasonable images when the norm of the seed is close to the mode of the chi distribution (L 115-123 in the main paper). Randomly sampling from a Gaussian distribution and applying current interpolation techniques most of the time yields visually appealing images because the seed norms possess this property. However, in practical scenarios with **real images**, the properties of seeds obtained from inversions may not always be favorable, leading to the failure of current interpolation methods. Our approach incorporates this prior knowledge and selects seeds that allow SD to generate plausible results.
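The norm concentration behind this answer is easy to check numerically. The sketch below is our own illustration (not the paper's code), assuming a Stable Diffusion-sized latent seed; it shows that a naive LERP midpoint leaves the high-likelihood norm band of the chi distribution:

```python
import numpy as np

# The norm of a d-dimensional standard Gaussian sample follows a chi
# distribution whose mass concentrates near sqrt(d); a linear-interpolation
# midpoint between two independent seeds shrinks the norm out of this band.
rng = np.random.default_rng(0)
d = 4 * 64 * 64                           # illustrative latent dimension
z1, z2 = rng.standard_normal(d), rng.standard_normal(d)

print(np.linalg.norm(z1) / np.sqrt(d))    # ≈ 1.0: norm concentrates near sqrt(d)
mid = 0.5 * (z1 + z2)                     # LERP midpoint of two seeds
print(np.linalg.norm(mid) / np.sqrt(d))   # ≈ 0.707: falls below the typical norm
```

Since the two seeds are nearly orthogonal in high dimensions, the midpoint norm drops by a factor of about sqrt(2), which is exactly the low-likelihood regime where SD degrades.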
#### **Q5: [1] was not explained in detail.**
**A5:** Indeed, [1] plays a crucial role in rare-concept generation. However, it is important to note that our approach does not depend on SeedSelect and operates independently. Nevertheless, in the final version, we will provide a more elaborate explanation of [1] and offer additional details about their methodology.
#### **Q6: 2D space examples and Figure 1 are difficult to understand.**
**A6:** We appreciate the reviewer's valuable feedback and their efforts to improve our paper. In these figures, the colors correspond to the log-likelihood of the chi distribution. The black points represent real images, while the colored points represent interpolation paths/centroids between these real images. Our approach prioritizes points with high log-likelihood, leading to visually appealing results when using SD. Conversely, other methods disregard the inherent structure of the seed space, resulting in poor-quality images. We acknowledge the need for additional clarification on these figures and will provide more detailed explanations in the text for the camera-ready version.
#### **Q7: Formatting errors and typos.**
**A7:** Thank you. Will be fixed in the camera-ready version.
#### **References**
[1]. Samuel et al. (2023), "It's all about where you start: text to image generation with seed selection"
---
Rebuttal Comment 1.1:
Title: Thanks for the response.
Comment: Thanks to the authors for the response and additional clarifications of my questions. I understand that the significance of choosing two basic methods is to explore the distribution of the seed space. In addition, I accept the authors' explanation that SD cannot always generate plausible images, and I hope that a brief explanation can be given in the final version. I have updated my rating accordingly.
---
Reply to Comment 1.1.1:
Title: Thank you for your review
Comment: Dear reviewer,
Thank you for your support, for the productive discussion and for the insightful feedback that helped us improve the paper!
We will provide the explanation in the final version as suggested. | Summary: The paper observed that the seed (noise) for the trained diffusion model has a property that the norm of the seed, which follows the $\chi$ distribution, is concentrated around a certain positive number, $\sqrt{d}$ where $d$ denotes the seed dimension. Based on this observation, the paper proposed a way to explore the seed space (e.g. interpolation, centroid) using Norm-Aware Optimization (NAO), which defines the objective function as likelihood maximization in the seed space. The paper applied seed space exploration with NAO to generate rare concepts and augment semantic data for few-shot classification and long-tail learning.
Strengths: - The paper clearly stated contributions with comparisons with existing works on latent space exploration.
- The proposed method relies on the inherent structure of the latent space, which is defined by the normal distribution, so it could be applied generally across the diffusion-model literature.
- Experiments are well-designed and easy to follow.
Weaknesses: - Some experimental conditions (e.g. number of piece-wise linear paths) are unclear.
- Although the proposed method uses multiple approximations, the paper does not provide analysis or experiments on the accuracy of the approximation.
- The comparison for rare-concept generation using centroid estimation seems unfair: the compared baseline is initialized randomly, whereas the proposed method is initialized with the Euclidean centroid.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - It is unclear how many piece-wise linear paths are used to estimate (1) in the experiments. Also, there is no discussion or analysis of $\delta$. If one sets a small $\delta$, the objective function (2) becomes harder to optimize since the number of variables increases. If one sets a large $\delta$, the approximation will be inaccurate. The paper mentions that consecutive path points are constrained to be close (lines 161-162), but the corresponding description is unclear. In particular, in line 212, what is the meaning of the constraints $c(x)\leq0$? There is no definition of the function $c(\cdot)$ or of the input argument $x$.
- For the centroid estimation, there are at least two approximation gaps: one from discretizing the line-integral of log-likelihood (2) and the other from sub-optimal solution for (2). However, there is no analysis of the accuracy of the proposed distance function and centroid estimation, except for empirical performance whose accuracy seems to be dominated by the Euclidean centroid initialization.
- For the rare-concept generation, the paper claims that SeedSelect with the initial point found by NAO-centroid achieves faster and better generation. However, NAO-centroid also initializes its centroid with the Euclidean centroid of inversion points whereas SeedSelect uses random initialization. For a fair comparison, the runtime and performance of the SeedSelect initialized with the Euclidean centroid should be compared. Note that the Euclidean centroid initialization of the NAO-centroid is “to speed up convergence” (in line 201).
- In Table 4, CIFAR-FS $T_{Opt}$ might be 21 sec, not 21 min. And there is no unit for miniImageNet $\bar{T}_{Opt}$.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes, the authors describe the limitations of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your positive feedback, acknowledging the broad usability of our approach and the well-designed, easy-to-follow experiments. We address your comments below.
#### **Q1: It is unclear how many piece-wise linear paths are used.**
**A1:** For few-shot learning benchmarks we optimized paths with a length of 200 points between the centroid of the k-shot training samples and the samples themselves (see line 108 suppl. material). For long-tail learning, we optimized paths ranging in length from 10 to 200, adjusting based on the available number of samples for each class (line 121 suppl. material). Due to lack of space we present more details in Appendix F of the supplementary material.
#### **Q2: What is the constraint c(x)≤0?**
**A2:** $c(x)$ corresponds to the constraint in Eq. 2, namely $|x_i-x_{i-1}| \leq \delta$. We formulate this constraint as $c_{i}(x) = |x_i-x_{i-1}| -\delta \leq 0$. This constraint is part of the discretization of the integral (into a sum of finite elements) in Eq. 2. In the implementation we enforce this constraint using a soft penalty term with a ReLU function, i.e., ReLU($c(x)$) (as described in line 212), so there is a penalty whenever $c(x)$ is positive (when the constraint is not satisfied). We apologize for the brevity in lines 211-212. We will elaborate on the discretization of this equation in the suppl. material and make sure $c(x)$ is clearly defined in the text in the revised version.
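To make the formulation concrete, a minimal sketch of the penalized objective as we read it from Eq. 2 follows. This is our own illustration under our own assumptions (e.g. the `penalty_weight` value), not the authors' implementation:

```python
import numpy as np

def chi_loglik(r, d):
    # chi log-density of the norm r of a d-dim standard Gaussian,
    # up to an additive constant
    return (d - 1) * np.log(r) - 0.5 * r ** 2

def path_loss(points, delta, d, penalty_weight=10.0):
    # points: (n, d) array of path points x_1..x_n between two seeds
    norms = np.linalg.norm(points, axis=1)
    nll = -chi_loglik(norms, d).sum()                 # maximize norm likelihood
    steps = np.linalg.norm(np.diff(points, axis=0), axis=1)
    penalty = np.maximum(steps - delta, 0.0).sum()    # ReLU(c_i(x)) soft penalty
    return nll + penalty_weight * penalty
```

In practice the points would be initialized on the straight line between the two inverted seeds and updated with a first-order optimizer, as the rebuttal describes.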
#### **Q3: Missing analysis for delta.**
**A3**: We appreciate the reviewer's valuable feedback and their contribution to improving our paper. In our experiments, we set $\delta$ to be $||z_1 - z_2||/n$, where $z_1$ and $z_2$ are seed inversions of real images ($x_1$ and $x_2$), and $n$ is the number of interpolation points to be optimized. In response to the reviewer's comment, we conducted an in-depth analysis to investigate the impact of different delta values on the FID of images generated along the interpolation path. Figure G2 (see pdf attached to the general response to all reviewers) presents the results of this analysis. The analysis is done with $n=100$ points. The findings reveal that excessively low or high delta values adversely affect the FID, as they either constrain the points too closely together, causing overlap, or spread them too far apart, leading the path into low-likelihood regions.
#### **Q4: Analysis of the accuracy of the proposed distance function.**
**A4:** Indeed solving the optimization problem in Equation (2) provides only an approximation of its continuous counterpart in Equation (1). We agree that there are two main error components: (i) approximation error of the discrete integral compared to the continuous one; (ii) the optimization error when optimizing Equation 2. This is an important observation, and we will include a discussion of it in the paper.
With regards to (i), since $\log P(x)$ is smooth, we expect the minimizer of Equation (1) to be smooth as well, which allows piecewise linear paths to approximate it with arbitrarily small error by using enough points in Equation (2).
Regarding (ii), we refer to this concern in lines 213-214. As is typically done in deep learning, we employ first-order optimization methods to optimize a non-smooth function. Therefore, analyzing the error and convergence is a very difficult problem that is beyond the scope of this paper. Based on our experimental results and 2D visualizations, we believe that our optimization process converges to a satisfactory solution. From a practical standpoint, we have observed that although the numerical solutions may not be exact, the FID measures, which indicate the quality of the generated images, consistently demonstrate high quality. This is also evident from the example images displayed in Figure 3 of the main paper and Figure S3 in the supplementary material.
#### **Q5: Comparison with SeedSelect initialized with the Euclidean centroid.**
**A5:** We indeed compare our results with SeedSelect initialized with the Euclidean centroid and also with the normalized Euclidean centroid, for a fair comparison. This is shown in Table 1 (main paper, page 7, L3,5). The results indicate that the performance improvement achieved by NAO is not solely attributed to the Euclidean initialization, but rather to the optimization problem formulated in Equation (2) of our paper. In fact, initializing centroids with methods other than NAO hurts SeedSelect performance (see Table 1). In terms of classifier accuracy, *NAO+SeedSelect* outperforms *Euclidean+SeedSelect* and *Normalized Euclidean + SeedSelect* by +94.3% and +12.3%, respectively. Additionally, in terms of FID score, *NAO+SeedSelect* performs +12.8% better than *Normalized Euclidean + SeedSelect*. For the few-shot and long-tail benchmarks, we employed the best strategy identified in Table 1, namely *NAO+SeedSelect*.
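For concreteness, the two baseline initializations can be sketched as follows. This is our reading of the baselines compared in Table 1, not the authors' code:

```python
import numpy as np

def euclidean_centroid(seeds):
    # seeds: (k, d) array of inverted seeds; plain arithmetic mean
    return np.mean(seeds, axis=0)

def normalized_euclidean_centroid(seeds):
    # same mean, rescaled back to the typical norm (~sqrt(d)) of a
    # d-dimensional standard Gaussian sample
    c = np.mean(seeds, axis=0)
    d = c.shape[0]
    return c * (np.sqrt(d) / np.linalg.norm(c))
```

The Euclidean mean of near-orthogonal seeds has an atypically small norm, which is why rescaling (or NAO's likelihood-aware optimization) matters.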
#### **Q6: Typo in Table 4.**
**A6:** Yes, it should have been 21 sec. Thank you. Will be fixed in the camera-ready version.
---
Rebuttal Comment 1.1:
Title: Thanks for the response.
Comment: I appreciate the response from the authors. I apologize for the instances that I missed during the reviewing of the paper and most of my concerns about the lack of experimental conditions are adequately resolved. Also, I'm pleased to hear that the authors will consider including a discussion about the approximation accuracy.
My final question is about Q5. The reason I thought that SeedSelect is randomly initialized for *Rare-concept generation* is lines 255-256.
> SeedSelect [43] is a baseline method where a seed is randomly sampled and *no centroid is calculated*.
Can authors clarify this? Does *no centroid is calculated* mean Euclidean centroid?
---
Reply to Comment 1.1.1:
Title: Euclidean Centroid with Seed Select
Comment: We appreciate your note. Table 2 shows the results from SeedSelect initialized with different methods: *random* in the first row and *Euclidean centroid* (Euclidean+SeedSelect) in the third row. The statement in lines 255-256 is misleading. We apologize for that. We will correct this in the revised version.
We’ll be glad to respond to any other concerns that you may have. | Summary: This paper investigates a new method for interpolating in the seed space of diffusion models, which is the Gaussian distribution used to initialize the generation process. Experiments demonstrate that diffusion models struggle with generation when the norm of the input differs from the distribution of norms of random noise samples drawn from the starting distribution; this can happen if the input is a LERP or SLERP interpolation between two random samples. The paper proposes to define a prior over the seed space using a chi distribution, and finds a path such that the likelihood of each point on the path is maximized under the prior distribution. This is optimized in a discretized fashion. A similar method can be used to find the centroid between multiple images, which minimizes the likelihood of the paths from each latent to the centroid. These methods are used to generate additional data, and studied in the context of rare-concept generation, few-shot recognition, and long-tail recognition.
Strengths: - The proposed method is demonstrated to generate effective images in limited data scenarios. By providing a few example images, the model can generate related images by interpolating between the inputs under high likelihood regions of the latent space.
- The method is effective when combined with prior existing methods. SeedSelect optimizes an initial seed to match the concepts in a few given images. When using the NAO-centroid to initialize the seed, the results are more effective and optimization is faster compared to using SeedSelect alone.
- Qualitatively, the results look compelling against other shown interpolations in the diffusion seed space.
Weaknesses: - It seems that the investigation is only performed on the input latent space, but I'm curious whether feature interpolation in alternative latent representations would preserve a stronger image prior. For example, Asyrp [1] demonstrates that smooth changes can be obtained by manipulating the h-space of a diffusion model. I think this would be a worthwhile baseline to compare to, rather than just interpolations in the input latent space.
- The method relies on optimization over a set of points. I think more details on the optimization could be provided here -- for example, how do the results differ if the number of interpolation points is changed, or the optimization time changes? What is the variation in this optimization procedure?
[1] Asyrp: https://arxiv.org/abs/2210.10960
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - in L212, what are the constraints $c(x) \geq 0$?
- Figure 1 left was difficult to understand in the first pass. Perhaps it would help to make clearer that the color refers to log likelihood of the $\chi$ distribution?
- Table 4: should it be 21 seconds rather than minutes?
- Are the optimization times stated per image in Table 4?
- What is the total overall overhead for the long-tail experiments -- how many images are generated per class?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are addressed in the conclusion. The key limitation is that this method requires additional optimization to generate one image, on the order of 30-60 seconds. The most effective use case of this method seems to be in conjunction with SeedSelect to produce plausible images with less optimization time, as the NAO method alone does not always produce recognizable images, as shown in Table 2.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for finding our approach effective with compelling results. We address your comments below.
#### **Q1: Comparing with Asyrp [1].**
**A1**: We value the reviewer's suggestion to conduct a comparison between interpolation in the input space (our approach) and interpolation in a feature space (Asyrp). After a deep look, we found that a direct comparison is not feasible for several reasons. First, the h-space lacks interpolation properties, as it is primarily designed for editing a predefined set of concepts. Second, the h-space edits carried out at each denoising step complicate straightforward image interpolation between images in a single space. Despite these challenges, we attempted to interpolate (using different interpolation techniques and NAO) between the h tensors of two images through all denoising steps. The outcome yielded corrupted images, underscoring the unsuitability of h-space for effective interpolation.
#### **Q2: More details regarding optimization.**
**A2:** Thank you for your valuable feedback. In response to this comment, we conducted additional experiments, replicating those performed in Table 1 of the main paper. We evaluated our approach by gradually increasing the number of optimization points and analyzed the impact on both FID scores and convergence time. The results demonstrate that increasing the number of points leads to slightly improved FID scores, with only a marginal rise in optimization time. We will add this experiment and additional details regarding the optimization procedure to our revised manuscript/suppl. material. An analysis of the variation will be added too.
| #points | NAO-path FID (lower is better) | NAO-path $T_{init}$ (lower is better) | NAO-centroid FID (lower is better) | NAO-centroid $T_{init}$ (lower is better) |
| ----------- | ----------- | ----------- | ----------- | ----------- |
| 10 | 6.78±0.1 | 21±1 s | 5.48±0.15 | 26±1 s |
| 50 | 6.7±0.09 | 23±1 s | 5.47±0.11 | 26±1 s |
| 100 | 6.67±0.06 | 24±1 s | 5.43±0.10 | 28±1 s |
| 1000 | 6.61±0.05 | 27±1 s | 5.41±0.08 | 30±1 s |
#### **Q3: What is the constraint c(x)≤0?**
**A3:** $c(x)$ corresponds to the constraint in Eq. 2, namely $|x_i-x_{i-1}| \leq \delta$. We formulate this constraint as $c_{i}(x) = |x_i-x_{i-1}| -\delta \leq 0$. This constraint is part of the discretization of the integral (into a sum of finite elements) in Eq. 2. In the implementation we enforce this constraint using a soft penalty term with a ReLU function, i.e., ReLU($c(x)$) (as described in line 212), so there is a penalty whenever $c(x)$ is positive (when the constraint is not satisfied). We apologize for the brevity in lines 211-212. We will elaborate on the discretization of this equation in the suppl. material and make sure $c(x)$ is clearly defined in the text in the revised version.
#### **Q4: Figure 1 was difficult to understand.**
**A4:** We thank the reviewer for their feedback. As pointed out by the reviewer, the colors in the figure correspond to the log-likelihood of the chi distribution. To address this concern, we will provide additional clarification on the figure and include more detailed explanations in the text for the camera-ready version.
#### **Q5: Typo in Table 4.**
**A5:** Yes, a typo, it should be 21 sec. Thank you. Will be fixed in the camera-ready version.
#### **Q6: Are the optimization times stated per image in Table 4?**
**A6:** Yes, for a fair comparison, we provided the optimization time per image when employing SeedSelect for rare concepts. Notably, our approach significantly reduces the optimization time from 5 minutes to just ~25 seconds and also leads to reduced memory requirements, allowing concurrent generation of multiple images. It is important to mention that for common (head) concepts, optimization is *unnecessary*, and our approach enables direct image generation.
#### **Q7: How many images are generated per class for long-tail learning?**
**A7:** We followed the experimental protocol of [2] and generated samples for each class until the combined total of real and generated samples equaled the count of the class with the highest number of samples in the dataset, resulting in a uniform data distribution. For more details, please refer to Appendix F in the Supplementary material.
#### **Q8: The method requires additional optimization to generate one image.**
**A8:** This is correct only for rare concepts. For those, additional optimization is required, but NAO's computational cost is significantly cheaper than the alternative, SeedSelect: while SeedSelect needs to backpropagate through the diffusion model, NAO performs a simple path optimization in the seed space. Please also refer to the response provided in the shared answer for all reviewers.
#### **References**
[1]. Kwon et al. (2023), "Asyrp: Diffusion Models already have a Semantic Latent Space".
[2]. Samuel et al. (2023), "It's all about where you start: text to image generation with seed selection".
---
Rebuttal Comment 1.1:
Title: Response to author rebuttal
Comment: Thanks to the authors for the response and additional clarifications. I agree with the other reviewers that the baselines presented are relatively simple, but I think the method is promising for tasks in which data is limited, by allowing a generator to produce additional samples from the limited data points (few-shot and long-tail applications). However, I find that improving the clarity of the paper would be extremely helpful. I have updated my rating accordingly.
---
Reply to Comment 1.1.1:
Title: Thank you for your review
Comment: Dear reviewer,
Thank you for your support, for the productive discussion and for the insightful feedback that helped us improve the paper!
We will work hard to improve the clarity of the final version as suggested. | Summary: This paper makes the observation that current training procedures make diffusion models biased toward inputs with a narrow range of norm values. To address this issue, the authors propose a novel method for interpolating between two seeds and demonstrate that it defines a new non-Euclidean metric that takes into account a norm-based prior on seeds. The authors describe a simple yet efficient algorithm for approximating this metric and use it to further define centroids in the latent seed space. The effectiveness of the proposed approach is validated on generating images of rare concepts, and augmenting semantic data for few-shot classification and long-tail learning.
Strengths: 1. The observation that current training procedures make diffusion models biased toward inputs with a narrow range of norm values is interesting and inspiring.
2. The proposed approach is well-motivated and aligns with intuition.
3. The observation and proposed approach have the potential to benefit many tasks related to the application of diffusion models.
Weaknesses: 1. The proposed approach only works well with seed optimization techniques such as SeedSelect, as indicated by Fig. 5 and lines 195-203. This implies that the derived interpolation paths may not be optimal, and it also introduces the extra computational cost of seed optimization.
2. The proposed approach is demonstrated mainly on generating rare images and augmenting data for few-shot learning. However, I am more interested in more applications that might benefit from the proposed approach. For example, can the proposed approach be applied for video interpolation and video generation and perform better than previous approaches?
3. Although significantly better than previous approaches, the results in Fig. 3 indicate that the interpolation results of the proposed approach are still not perfect. Do the authors have insights on the reason for the imperfect interpolation results?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the weakness section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitation is adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for finding our approach inspiring and interesting. We address your comments below.
#### **Q1: The proposed approach only works well with seed optimization.**
**A1:** Let us explain where and why NAO works well without seed optimization and the relation between the two methods. First, we stress that seed optimization is only necessary for generating rare concepts (See SeedSelect [1] and in lines 29 and 62 of our paper). For rare concepts, diffusion models often fail. They may generate high-quality images but from a wrong category. Seed optimization presents a solution for this scenario.
NAO functions effectively without relying on seed optimization. This is demonstrated in Figures 1, 3, S3, and Table 1. NAO's interpolations and centroids enable the generation of semantically consistent images even for **common (head)** concepts where SeedSelect is unnecessary. For instance, head classes like "tuna" and "jersey" have multiple interpretations and can lead to the generation of incorrect objects. "Tuna" might denote either a "tuna fish" or a "tuna can," while "jersey" could refer to a "standard t-shirt" or a "sports team shirt". Given a few reference images with their concept name, NAO can generate images that belong to the correct domain.
To further illustrate NAO’s advantage, we replicated the experiments from Table 1 solely on 50 randomly selected **common (head)** concepts, **without employing seed optimization**. Subsequently, we compared NAO to two alternate interpolation methods in the provided table. The results unequivocally confirm that NAO achieves significantly higher accuracy (in generating the correct concept) and improved FID.
| | Acc (higher is better) | FID (lower is better)|
| ----------- | ----------- | ----------- |
| LERP | 0.00 | 60.60 |
| SLERP | 69.12 | 17.55 |
| **NAO-path (ours)**| **88.91** | **5.21** |
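For reference, the LERP and SLERP baselines in the table follow their standard definitions; the sketch below is our own (not the paper's code). In high dimensions the LERP midpoint's norm collapses by a factor of about sqrt(2), while SLERP preserves the endpoint norms, which matches the large FID gap between the two rows:

```python
import numpy as np

def lerp(z1, z2, t):
    # straight-line interpolation; midpoint norm shrinks for near-orthogonal seeds
    return (1 - t) * z1 + t * z2

def slerp(z1, z2, t):
    # spherical linear interpolation; approximately preserves the seed norm
    cos_omega = np.dot(z1 / np.linalg.norm(z1), z2 / np.linalg.norm(z2))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    return (np.sin((1 - t) * omega) * z1 + np.sin(t * omega) * z2) / np.sin(omega)
```

NAO differs from both in that it optimizes the entire path against the chi prior on norms rather than following a fixed geometric rule.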
#### **Q2: Can the proposed approach be applied for video interpolation and video generation?**
**A2:** We found the idea of applying our method to video interpolation intriguing, following the reviewer's suggestion. As a result, we applied our approach to the task of video frame interpolation, aiming to generate an intermediate frame between existing consecutive frames (I1 and I2) in a video sequence. Using NAO, we interpolated between the seed inversions of two given frame images, optimizing 50 points, and generating an image from the middle points. Example images in Figure G1 (see pdf attached to the general response to all reviewers) illustrate NAO's ability to create intermediate frames. To further demonstrate NAO’s capability, we also conducted a quantitative evaluation. To this end, we followed the experiment outlined in [2,3], focusing on the **DAVIS dataset** [4]: a widely acknowledged benchmark for video frame interpolation tasks. The evaluation of predicted frames against the ground truth was carried out using established metrics like PSNR and SSIM. The table demonstrates that our interpolation approach achieves results comparable to existing methods specifically tailored for video interpolation, despite solely using a pre-trained text-to-image model. We appreciate the reviewer's insightful suggestion, as this presents a promising avenue for future research.
| | PSNR (higher is better) | SSIM (higher is better)|
| ----------- | ----------- | ----------- |
| MCVD [3] | 18.646 | 0.705 |
| LDMVFI [2] | 25.541 | 0.833 |
| **NAO-path (ours)**| 25.413 | 0.813 |
#### **Q3: Do the authors have insights on the reason for the imperfect interpolation results?**
**A3:** As indicated in A1 and also mentioned in [1], rare concepts often have restricted regions within the seed space where plausible and correct images can be generated. This is evident in Figure 3, where NAO performs well for the common concept of a jeep, while for the rare concept of a tiger cat only *most* of the images appear satisfactory. Despite this complexity, our approach surpasses existing interpolation methods by consistently generating superior, semantically accurate images at all interpolation points.
#### **References**
[1]. Samuel et al. (2023), "It's all about where you start: text to image generation with seed selection".
[2]. Danier et al. (2023), "LDMVFI: Video Frame Interpolation with Latent Diffusion Models".
[3]. Voleti et al. (2022), "MCVD: Masked Conditional Video Diffusion for Prediction, Generation, and Interpolation".
[4]. Perazzi et al. (2016), "A benchmark dataset and evaluation methodology for video object segmentation".
---
Rebuttal Comment 1.1:
Title: Thank authors for the rebuttal
Comment: Thank the authors for the rebuttal. The authors have addressed most of my concerns so I updated my rating accordingly.
---
Reply to Comment 1.1.1:
Title: Thank you for your review
Comment: Dear reviewer,
Thank you for your support, for the productive discussion and for the insightful feedback that helped us improve the paper! | Rebuttal 1:
Rebuttal: Dear Reviewers and ACs,
We were happy to see that the reviewers have found our approach **"interesting and inspiring"**, **"well-motivated" (R1)**, and recognized its **potential to benefit various tasks related to diffusion models' applications (R1, R3)**. Additionally, they have acknowledged that our method is based on **"good theoretical support" (R4)** and have found our experiments to be **“well-designed”**, **“easy-to-follow” (R3)** and **"compelling" (R2)**.
We have addressed the reviewer's concerns in our rebuttal and are open to further discussion. Your input has been instrumental in improving our paper.
### General response to all reviewers
NAO presents an innovative method to interpolate between seeds, revealing a new non-Euclidean metric that takes into account a prior over samples. Our efficient algorithm approximates this metric, facilitating centroid estimation in the latent seed space and enhancing the generation of rare concept images.
#### **Q1: The proposed approach only works well with seed optimization.**
**A1:** Let us explain where and why NAO works well without seed optimization, and the relation between the two methods. First, we stress that seed optimization is only necessary for generating rare concepts (see SeedSelect [1] and lines 29 and 62 of our paper). For rare concepts, diffusion models often fail: they may generate high-quality images, but from a wrong category. Seed optimization presents a solution for this scenario.
NAO functions effectively without relying on seed optimization. This is demonstrated in Figures 1, 3, S3, and Table 1. NAO's interpolations and centroids enable the generation of semantically consistent images even for **common (head)** concepts where SeedSelect is unnecessary. For instance, head classes like "tuna" and "jersey" have multiple interpretations and can lead to the generation of incorrect objects. "Tuna" might denote either a "tuna fish" or a "tuna can," while "jersey" could refer to a "standard t-shirt" or a "sports team shirt". Given a few reference images with their concept name, NAO can generate images that belong to the correct domain.
To further illustrate NAO’s advantage, we replicated the experiments from Table 1 solely on 50 randomly selected **common (head)** concepts, **without employing seed optimization**. Subsequently, we compared NAO to two alternate interpolation methods in the provided table. The results unequivocally confirm that NAO achieves significantly higher accuracy (in generating the correct concept) and improved FID.
| | Acc (higher is better) | FID (lower is better)|
| ----------- | ----------- | ----------- |
| LERP | 0.00 | 60.60 |
| SLERP | 69.12 | 17.55 |
| **NAO-path (ours)**| **88.91** | **5.21** |
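For reference, the two baseline interpolation schemes in the table can be sketched as follows (a generic implementation over flat seed vectors; the actual seeds of a diffusion model are latent noise tensors):

```python
import numpy as np

def lerp(z0, z1, t):
    """Linear interpolation between two seed vectors."""
    return (1.0 - t) * z0 + t * z1

def slerp(z0, z1, t, eps=1e-8):
    """Spherical linear interpolation, which preserves the norm
    statistics of Gaussian seeds better than LERP."""
    omega = np.arccos(np.clip(
        np.dot(z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)), -1.0, 1.0))
    if omega < eps:  # nearly parallel endpoints: fall back to LERP
        return lerp(z0, z1, t)
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

rng = np.random.default_rng(0)
z0, z1 = rng.standard_normal(512), rng.standard_normal(512)
# SLERP keeps the midpoint norm close to the endpoints; LERP shrinks it,
# pushing the midpoint off the typical set of the Gaussian seed prior.
print(np.linalg.norm(slerp(z0, z1, 0.5)), np.linalg.norm(lerp(z0, z1, 0.5)))
```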
#### **Q2: What is the constraint c(x)≤0?**
**A2:** $c(x)$ corresponds to the constraint in Eq. 2, namely $|x_i-x_{i-1}| \leq \delta$. We formulate this constraint as $c_{i}(x) = |x_i-x_{i-1}| -\delta \leq 0$. This constraint is part of the discretization of the integral in Eq. 2 into a sum of finite elements. In the implementation we enforce this constraint using a soft penalty term with a ReLU function, i.e., ReLU($c(x)$) (as described in line 212), so there is a penalty whenever $c(x)$ is positive (when the constraint is not satisfied). We apologize for the brevity of lines 211-212. We will elaborate on the discretization of this equation in the supplementary material and make sure $c(x)$ is clearly defined in the text of the revised version.
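The discretized soft-penalty term described above can be sketched as follows (an illustrative numpy version, assuming `x` is a discretized path of points in seed space):

```python
import numpy as np

def path_penalty(x, delta):
    """Soft penalty enforcing |x_i - x_{i-1}| <= delta on a discretized
    path: sum of ReLU(|x_i - x_{i-1}| - delta) over consecutive points."""
    steps = np.linalg.norm(np.diff(x, axis=0), axis=1)    # |x_i - x_{i-1}|
    return float(np.sum(np.maximum(steps - delta, 0.0)))  # ReLU(c_i(x))

# A path of 50 points between two 4-D endpoints (50 matches the setup above).
x = np.linspace(np.zeros(4), np.ones(4), 50)
print(path_penalty(x, delta=0.1))   # 0.0: every step norm (~0.041) fits delta
print(path_penalty(x, delta=0.01))  # positive: the constraint is violated
```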
#### **References**
[1] Samuel et al. (2023), "It's all about where you start: text to image generation with seed selection".
Pdf: /pdf/bb5aa2c0d9fbeb499d5a7b6141eaa2eda662cb43.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
The Learnability of In-Context Learning | Accept (poster) | Summary: The authors propose a PAC framework to analyze the expressive power of in-context learning in a finite sample complexity scheme. The framework consists largely of two parts: the first is the initial next-token-prediction pretraining phase, and the second is the in-context learning phase. Regarding the pretraining data distribution as a mixture of latent tasks, the authors show that polynomial sample complexity guarantees in-context learnability.
Strengths: - Define a PAC framework to investigate in-context learning
- Reduce the sample complexity needed for in-context learning to polynomial complexity, whereas previous work required infinitely many samples
Weaknesses: - The number of model parameters is also a crucial factor for in-context learning, but it seems that the paper does not sufficiently handle model complexity.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - In line 143, the authors say they aim to provide an analysis of in-context learnability of “large model”. With the specified sample complexity, what is the guideline for that “large model”?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: - Lack of experimental evidence to support the theory (at least experiments on synthetic data would be helpful)
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your thoughtful and supportive feedback.
1. Regarding the model size, intuitively the learning algorithm in Assumption 1 is a large language model that fits the pre-training distribution well enough. By assuming scaling-law behavior with respect to model size, one might get an emergence phenomenon in which the capability of recognizing certain downstream tasks with in-context learning emerges from a certain model size, which depends on the margin of the downstream task. We will add a discussion of this avenue for expansion in the camera-ready version.
2. Following your request, we promptly conducted an experiment to support our theory. In particular, the analyzed setting includes as a special case the mixture of HMMs examined in [1]. Therefore, the GINC simulations already conducted in Section 4 of their paper lend support to our theory. Moreover, as shown in the attached figure, our PAC bounds provide finite sample complexity, unlike the asymptotic analysis in [1]. We were thus able to apply these bounds to the GINC dataset. This demonstrates that the analyzed framework predicts the in-context capabilities of pre-trained LLMs.
3. Regarding guidelines for large language models (LLMs), our analysis reveals that the accuracy of in-context learning for large language models (LLMs) is determined by three key factors. First, the ability to recognize relevant tasks among those learned during pretraining. Second, the ability to overcome distribution drift caused by the unnatural concatenation of in-context examples. And third, the performance of the LLM once it identifies the right task. Importantly, our bounds suggest that as the number of in-context examples increases, improvements from the first two factors plateau (Theorem 1). Thus, for further gains, one must utilize a better pretrained LLM (Theorem 2 proof). We will discuss these implications of our theorems in more detail in the camera-ready version.
[1] Xie, Sang Michael, et al. “An explanation of in-context learning as implicit Bayesian inference.” ICLR 2022. | Summary:
This paper attempts to formalise the few-shot in-context learning phenomenon observed in large language models. To that end, they make a set of assumptions about the underlying data-generating distributions, pretrained models, etc and try to formalise the task of in-context learning in the PAC framework.
The main empirical phenomenon that is the subject of study is the ability of LLMs to predict a query input accurately after receiving a sequence of pairs of inputs and labels. Broadly, they prove that if the distributions over strings for pretraining and providing in-context examples are defined in a certain way and satisfy a set of assumptions, then for large enough k, f(y | x_1, y_1, …, x_k, y_k) will be approximately correct with high probability (in the PAC sense) where f is the language model.
In their framework, they assume that the target task during in-context learning is part of the pretraining distribution based on prior observations in empirical works. The hypothesis class for pretraining is a set of mixture distributions of downstream distributions. The downstream distributions can be seen as a distribution over pairs of inputs and labels. The sampling from the pretraining distribution can be decomposed into sampling a task from a prior distribution over the set of tasks and then sampling from the respective downstream distribution corresponding to the task. They assume that we have an accurate language model (in the PAC sense) where the error with the target distribution is bounded in terms of TVD of the conditional next word distribution -- based on the fact that we have LMs that are accurate in modelling the underlying distribution. Given such an accurate probabilistic LM and inputs from such distribution, they show that given a prompt, for large enough k, the error for f(y | x_1, y_1, …, x_k, y_k) with respect to the Bayes optimal predictor is less than ε with high probability.
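The quantity f(y | x_1, y_1, …, x_k, y_k) above is a conditional read off from the language model's joint string likelihood; a toy bigram model makes the joint-to-conditional relation concrete (purely illustrative, not the paper's model):

```python
# A toy probabilistic model f over binary strings whose joint likelihood
# factorizes over a bigram table: f(o_1 ... o_T) = prod_t P(o_t | o_{t-1}).
# This stands in for the LLM's joint distribution over token sequences.
bigram = {("<s>", "0"): 0.6, ("<s>", "1"): 0.4,
          ("0", "0"): 0.7, ("0", "1"): 0.3,
          ("1", "0"): 0.2, ("1", "1"): 0.8}

def f(tokens):
    """Joint likelihood of a token sequence under the toy model."""
    p, prev = 1.0, "<s>"
    for t in tokens:
        p *= bigram[(prev, t)]
        prev = t
    return p

def f_conditional(next_tok, prefix):
    """f(o_T | o_1 ... o_{T-1}) := f(o_1 ... o_T) / f(o_1 ... o_{T-1})."""
    return f(prefix + [next_tok]) / f(prefix)

# The joint-likelihood ratio recovers exactly the bigram conditional:
print(f_conditional("1", ["0", "0"]))  # 0.126 / 0.42 = 0.3
```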
Strengths:
(S1) I think the formalisation is interesting to some extent, and their analysis provides some intuition as to why few-shot learning, as observed in GPT-3, works to some degree. I think the key idea is to use the fact that the ratio P_A(p) and P_B(p) converge to zero as k gets larger, where B is the target task, and A is a different task. This helps distinguish the tasks and improves the margin between the correct label and incorrect label for the LM as the size of the prompt goes larger.
Another interesting part about their framework is that they treat it as a distribution modelling task, unlike some other recent efforts to formalise in-context learning [2, 3] and derive sample complexities [1].
They have taken a more physics-like approach, where they have assumed and defined the models and data-generating distributions in a certain way based on empirical observations and tried to work out why prompts could lead to correct answers if the pretraining data was not of that form. The four assumptions are not completely unrealistic. At least within their framework, it is somewhat clear why flipping some labels is not as detrimental as one would expect.
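The mechanism highlighted in (S1) — the ratio P_A(p)/P_B(p) vanishing as the prompt grows — can be simulated with two toy Bernoulli "tasks" (an illustrative sketch, not the paper's construction):

```python
import numpy as np

# Two toy "tasks": Bernoulli label distributions with different biases.
p_A, p_B = 0.3, 0.7          # P(label = 1) under task A and under task B

rng = np.random.default_rng(0)
y = rng.random(200) < p_B    # a prompt of labels drawn from the true task B

# log P_A(y_1..y_k) - log P_B(y_1..y_k) as prompt length k grows; with
# independent examples the log joint is a running sum of per-example terms.
per_example = np.where(y, np.log(p_A / p_B), np.log((1 - p_A) / (1 - p_B)))
log_ratio = np.cumsum(per_example)
print(log_ratio[0], log_ratio[19], log_ratio[199])  # drifts toward -inf: task B wins
```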
(S2) In-context learning has been an intriguing phenomenon, with multiple works seeking to understand it. I think this paper takes a step towards that and could be useful to other researchers working in this area.
(S3) The paper is well written, easy to follow, and the arguments are clearly presented.
Weaknesses: While the analysis and the framework are interesting, I think there are a few issues with the framework that limits its applicability in helping us understand in-context learning.
(W1) The way the pretraining distribution is defined is not necessarily reflective of the real-world data. To be clear, it need not always be and depends on the nature of the theoretical work. For instance, the setup [1-3] is quite simplified, but they still allow us to train Transformers on those specific tasks and test how well the theory predicts the sample complexities in the simplified setting. In the case of this paper, the theory seems to be an attempt to directly model in-context learning in the real-world scenario, and the way the pretraining distribution is defined seems a bit detached from real-world text.
Additionally, if my understanding is correct (correct me if I am wrong), each example from the pretraining distribution seems to be an input example followed by a label from a downstream task. This seems a bit far from the way LLMs are trained as well. To some extent, it seems like the pretraining and downstream distributions are designed to satisfy some properties favourable for the end result but not necessarily reflective of the real data. I understand that it is not possible to precisely define real-world data, but given the goal of the paper, it seems like the value of results does depend on how well the data-generating distribution reflects real-world data.
(W2) It seems like the sample complexities are not necessarily predictive of the number of few-shot examples needed for in-context learning. I think it stems from the way the framework is defined and the disconnect with practice. Standard learning theoretic frameworks such as PAC models are entirely formal abstractions and do not contain assumptions about the real world (apart from train and test being the same distribution) -- hence allowing worst-case sample complexities to be meaningful. It also allows us to analyse new algorithms and learning problems. In this scenario, however, it is difficult to see how this framework can be used to analyse the problem or new algorithms further. The framework used in [1] is simplified but is still within a setting where models can be trained and tested to evaluate how well the sample complexities reflect the true performance within the simplified setting.
[1] Transformers as Algorithms: Generalization and Stability in In-context Learning
[2] What Can Transformers Learn In-Context? A Case Study of Simple Function Classes
[3] Transformers Learn In-Context by Gradient Descent. 2022
-------------------------------------------
Typos:
L266: Section ??
L270: That it -> That is
L305: Section ??
I have given a score of 6 for now, but I am open to changing my score based on the authors' responses and discussions with other reviewers.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: NA
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your thoughtful and supportive feedback.
1. We agree that in real-world data there is an additional distribution shift, caused by the fact that tasks in the pre-training mixture distribution usually use softer labels and more flexible input formats. That being said, to the best of our knowledge the analyzed data distribution is still the most realistic one that has been analyzed so far. For example, the analyzed setting includes as a special case the mixture of HMMs that is analyzed in [2]. Moreover, note that the underlying mechanism behind our results is Lemma 1, which can be modified easily for other input formats. Hence, one needs to assume that input examples are followed by a label from downstream tasks only for bounding the zero-one loss; soft in-context learning as defined in [1] is guaranteed also with soft input formats.
2. Furthermore, our framework allows for an empirical evaluation of how accurately the sample complexities reflect true performance within the rather realistic simplified setting. In particular, since the analyzed setting includes as a special case the mixture of HMMs examined in [2], we can apply our bound to the GINC simulations they conducted in Section 4 of their paper. As shown in the attached figure, our bounds correlate well with the trends of real-world in-context learning performance, even though they deviate significantly from the exact accuracies. Notably, the analysis in [2] is only asymptotic and does not provide finite sample complexity guarantees. Additionally, our theoretical definitions permit analyzing novel in-context learning algorithms such as CSCGs [3].
#### Details regarding our guarantees for the GINC dataset
The estimation of the KL divergence between the different HMM components significantly influences our bounds and is somewhat unreliable, since the GINC dataset from [2] violates their Assumption 5 and assigns zero probability to some strings. To overcome this issue, we clipped the lowest probabilities to $10^{-32}$, and during the Monte Carlo estimation of the KL divergence we omitted samples that violate this lower bound. As a result, we estimate the KL divergence between the different components to be approximately $15$ and substitute this value into our bounds. For reproducibility, we provide an implementation of our bounds below.
```python
import numpy as np

def our_bound(kl=15.0, delta=0.99, lowest_prob=1e-32, task_length=5, mixture_size=5):
    """Number of in-context examples our bound requires for each target accuracy."""
    accuracy = np.linspace(1, 99.9, 100)  # target in-context accuracies (%)
    epsilon = 1 - (accuracy / 100)        # corresponding error rates
    a = -16 * np.log(delta) * (np.log(lowest_prob) ** 2) / (kl ** 2)
    b = -2 * np.log(2 * epsilon / mixture_size) / (kl + (np.log(lowest_prob) / task_length))
    return np.maximum(a, b), accuracy
```
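The clipped Monte Carlo KL estimation described above can be sketched generically as follows (illustrative only; `log_p` and `log_q` stand in for the log-likelihood functions of two HMM components, and the Gaussian check merely validates the estimator):

```python
import numpy as np

def clipped_kl_estimate(samples, log_p, log_q, lowest_prob=1e-32):
    """Monte Carlo estimate of KL(P || Q) from samples drawn from P,
    omitting samples whose log-probability under either model falls
    below log(lowest_prob), mirroring the clipping described above."""
    floor = np.log(lowest_prob)
    lp = np.array([log_p(s) for s in samples])
    lq = np.array([log_q(s) for s in samples])
    keep = (lp >= floor) & (lq >= floor)   # drop samples violating the bound
    return float(np.mean(lp[keep] - lq[keep]))

# Sanity check on 1-D Gaussians, where KL(N(0,1) || N(1,1)) = 0.5 exactly.
rng = np.random.default_rng(0)
xs = rng.standard_normal(20_000)
log_p = lambda x: -0.5 * x ** 2 - 0.5 * np.log(2 * np.pi)
log_q = lambda x: -0.5 * (x - 1) ** 2 - 0.5 * np.log(2 * np.pi)
print(clipped_kl_estimate(xs, log_p, log_q))  # close to 0.5
```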
### References:
[1] Olsson, Catherine, et al. "In-context learning and induction heads." arXiv preprint arXiv:2209.11895 (2022).
[2] Xie, Sang Michael, et al. “An explanation of in-context learning as implicit Bayesian inference.” ICLR 2022.
[3] Swaminathan, Sivaramakrishnan, et al. "Schema-learning and rebinding as mechanisms of in-context learning and emergence." arXiv preprint arXiv:2307.01201 (2023). | Summary: The paper presents a theoretical framework for in-context learnability. The framework is grounded in the Probably Approximately Correct (PAC) learning theory and provides the first-ever finite sample complexity results for the in-context learning setup. The authors' approach involves a pretraining phase followed by an in-context learning phase where the training examples of the downstream task are concatenated in the input. The paper argues that, under specific conditions, latent tasks in the pretraining distribution can be effectively learned via in-context learning without changing the model's weights, even when the input significantly deviates from the pretraining distribution. This finding aligns with recent empirical results and suggests that in-context learning is more about task identification than learning.
Strengths: - The paper is well-written and clearly presented;
- The paper tackles an important and underexplored theoretical aspect of in-context learning in LLMs.
- It presents the first-of-its-kind PAC-based framework for in-context learnability, which could pave the way for further theoretical investigations.
- The paper links its theoretical analysis to recent empirical findings, providing a sound validation for its results.
Weaknesses: - While the paper discusses a pretraining phase and an in-context learning phase, it doesn't clearly address how the transition between these phases is managed or optimized. Also, the instruction-tuning phase might be missing to ensure the success of in-context learning and mixture of tasks as defined in the paper;
- Some empirical evaluations, including flipping labels, could also be valuable to add [1, 2];
[1] Min, Sewon, et al. "Metaicl: Learning to learn in context." arXiv preprint arXiv:2110.15943 (2021).
[2] Wei, Jerry, et al. "Symbol tuning improves in-context learning in language models." arXiv preprint arXiv:2305.08298 (2023).
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - What if the downstream task is OOD from the pretraining data (which is common to see in terms of the generalization ability of instruction-tuned large language models), will the assumption, and conclusion still hold in Lemma 1?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: - The PAC framework employed may not cover all aspects or scenarios of in-context learning with LLMs (order, variance in prompts, chain-of-thought prompting, generalization, emergent abilities, and so on).
- Some inconsistencies in line 266, 305 with the appendix could be improved.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your thoughtful and supportive feedback.
1. Regarding the transition between the pretraining phase and the in-context learning phase, please note that we analyzed vanilla in-context few-shot learning as introduced in the GPT-3 paper [1], which does not have an instruction-tuning phase between the two phases. That said, our Definition 1 of in-context learning is general enough to potentially incorporate an instruction-tuning phase. Analyzing the effect of such a phase is an exciting open problem that we leave for future work.
2. Regarding the ability to generalize to out-of-distribution downstream tasks, our results do not fully explain the capability of learning new tasks not included in the pre-training distribution, as acknowledged in the conclusion. However, since our results also hold under weaker approximate independence assumptions (see our response to o2CM), they cover the ability of large language models to infer latent learning algorithms like linear regression from a mixture of algorithms learned during pre-training. Importantly, inferring the right learning algorithm among already learned algorithms might provide generalization to out-of-distribution downstream tasks. Please see our answer to yYWi for more details about the ability to infer latent learning algorithms.
3. Regarding variance and order variation see our first answer to o2CM.
4. Regarding emergent abilities, see our answer to 86vc.
[1] Brown, Tom, et al. "Language models are few-shot learners." NeurIPS 2020. | Summary: This paper studies the PAC-learnability of in-context learning when the pretraining distribution is a mixture of latent tasks, and the downstream task belongs to one of them. In addition to this mixure-of-tasks assumption, the other non-trivial assumptions the authors make include: the pretrained model can approximate the pretraining distribution arbitrarily close with a polynomial number of examples, and that the adjacent strings to be concatenated are approximately independent. Based on these assumptions, the author provides sample complexity results to demonstrate in-context learnability. Particularly, the correctness of the downstream task labels does not affect the validity of the conclusions.
Strengths: * The problem addressed in this paper, namely theoretical understanding of in-context learning, is undoubtedly important and timely.
* The author addresses the PAC-learnability of in-context learning, which has not been well-studied. The problem formulation and results proposed by the authors are also interesting and novel.
* The overall presentation is clear, with clear motivations and summaries for each part (although several details need to be improved, see Weaknesses).
Weaknesses: * The validity of the overarching mixture-of-tasks assumption for the pretraining distribution may need more justification. It appears to oversimplify the problem given the messy pretraining corpora and the novel downstream tasks used in practice (although it generalizes the similar assumption made by Xie et al. 2022, this limitation still exists).
* The insensitivity to the correctness of in-context examples' labels may contradict recent observations. For instance, Wei et al. 2023 demonstrate that LLMs can do linear regression in in-context learning, which is clearly dependent on labels and thus cannot be directly explained by this paper.
* There are some issues with the current presentation and writing that impede readability, such as:
- The input and output of function $f_\theta$ should be made clearer. Based on the context, its input seems to be a string, but in equation (4) it has a conditional-style input which is not explained in the main text.
- The meaning of $x$, $o$, and $s$ should be made clearer, preferably with some examples. The Kleene star symbol should also be clarified.
- Typos, for instance run-away references to the appendix, Line 297 "other another", Line 362 "works designed" -> "works are designed".
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Is the framework extendable to explain the observation where in-context learning can do label-sensitive linear regression?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have not discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your thoughtful and supportive feedback.
1. We agree that pre-training data in the real world is often messy, and we will acknowledge that as a limitation in the camera-ready paper. Note that we already discussed some limitations of our work in the last paragraph of Section 5. Specifically, beyond the capability of learning new tasks, our framework also lacks a formal explanation of the connection between model size and the efficiency of in-context learning.
2. Regarding the ability to perform label-sensitive linear regression, our results do not fully explain the capability of learning new tasks not included in the pre-training distribution, as acknowledged in the conclusion. However, since our results also hold under weaker approximate independence assumptions (see our response to o2CM), they cover the ability of large language models to infer latent learning algorithms like linear regression from a mixture of algorithms learned during pre-training. Importantly, when the different mixture components are learning algorithms, the labels become the dominant contributors to the Kullback-Leibler divergence between components, rendering Theorem 2 ineffective. In such cases, one could prove a similar theory relying solely on correct labels, resulting in more relaxed conditions than the conditions in line 252 ($\Delta_{\text{KL}}>8\log\frac{1}{c_{1}\cdot c_{2}}$). We will clarify the distinction between label-sensitive and label-insensitive results in the camera-ready version. Finally, when a mixture component is a learning algorithm, it is reasonable to assume the appearance of the delimiter token after labels does not constitute a distribution drift from pre-training. Thus, together with the weaker independence assumptions, one could potentially eliminate the condition in line 252 completely.
3. We apologize for the unclear notation. We will make the notation cleaner for the camera version.
* As you correctly inferred, $f_\theta$ is a probabilistic model that takes strings as input and outputs the likelihood of the input string. Regarding Equation 4, we apologize for the abuse of notation; the intended meaning of the conditional notation is $f_{\theta}\left(o_{T}\,|\,o_{1}\dots o_{T-1}\right)\coloneqq\frac{f_{\theta}\left(o_{1}\dots o_{T-1}o_{T}\right)}{f_{\theta}\left(o_{1}\dots o_{T-1}\right)}$.
* The meaning of $x$ is the task inputs (see line 99). For example, “What is the name of the first president of the United States?”.
* The meaning of $o$ is a single token (see line 182). For example, “What”.
* The Kleene star in $\Sigma^{\star}$ stands for the set of finite sequences over the alphabet $\Sigma$.
* The meaning of $s$ is a string in $\Sigma^{\star}$ (see line 206). | Rebuttal 1:
Rebuttal: We thank all the reviewers for their thoughtful feedback. We apologize for the broken links to the appendix. The link in line 266 should point to section 1 in the appendix, while the link in line 305 should point to section 2 in the appendix.
Pdf: /pdf/fdd0525b646b645ea401f2208117ba4ac58bd6cb.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper aims to explain the learnability of in-context learning. The main idea is that the pretraining tasks learn multiple downstream tasks and the prompt specify a particular task.
Strengths: 1. The paper addresses the learnability of in-context learning. It's a hot topic and, to my knowledge, very few theories have been constructed.
2. The authors made several assumptions on in context learning and explain why they make the assumptions clearly.
Weaknesses: Though I am positive towards the paper, I think the main weakness is that the theoretical framework is too far away from the practice. The paper tells a good story but I am not sure what kind of guidance can the theory provide for the in-context learning in practice.
For example, it looks like providing a larger number of examples in prompts makes the bound tighter. Is this really true for in-context learning? The authors can totally claim that there is a computation limitation so we cannot make the number of examples as large as possible in practice. But if so, we also do not need in-context learning. The authors kind of explain the mystery of in-context learning. But kind of not. If we can make the number of examples infinite, supervised learning should also work well. I think the gap here is that the authors do not explain why this kind of in-context learning is better than supervised learning.
Overall, I think the contribution is interesting but not significant.
Minor:
There are some ??'s in the paper. The authors probably want to fix them.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See the weakness section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: This is a theory paper. Societal impact is irrelevant.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your thoughtful feedback.
1. Increasing the number of in-context examples has been shown to be beneficial in practice. Evidence for this can be traced back to the GPT-3 paper [1], in which Figure 1.2 clearly demonstrates that performance improves as the number of in-context examples increases. Moreover, these observations have inspired works such as [2] to invent techniques to overcome the computational limitations of using a large number of in-context examples, their figure 1 clearly shows this improvement trend.
2. Your thoughtful feedback has inspired us to prove a simple corollary of Theorem 2, establishing a case where the sample complexity of in-context learning is provably better than that of supervised learning. Specifically, the downstream tasks will involve learning parity functions over unknown subsets of input bits. To reflect that concatenating independent examples is unnatural in pre-training, the pre-training distribution will mix two distributions corresponding to the downstream parity functions. Importantly, new line tokens will usually not follow labels, and successive examples in pre-training will usually share inputs rather than be independent. The VC dimension of this hypothesis class scales linearly with the number of bits, so supervised learning's sample complexity does too. In contrast, since pre-training reduces the hypothesis class to two relevant parity functions, Theorem 2 gives a constant sample complexity regardless of the number of bits. Thus, this is effectively **few-shot** learning. In essence, the corollary shows pre-training followed by in-context learning can significantly reduce sample complexity compared to supervised learning. This demonstrated advantage is notably stronger than existing results like [3] on pre-training benefits for supervised fine-tuning. Please find below a proof sketch for this corollary.
### Proof sketch for a corollary of theorem 2 on the advantage of in-context learning over supervised learning:
To utilize Theorem 2, we employ Algorithm 2 from [4] as our pretraining algorithm. Importantly, Theorem 2 from that paper provides strong formal guarantees, ensuring Assumption 1 of our paper holds. Now, since we allow successive examples in pretraining to be independent with low probability $c \ll 1$, successive examples are approximately independent given the new line token. Therefore, Assumption 2 also holds. Similarly, we will allow incorrect labels and spontaneous new line tokens to occur with the small probability $c$. This ensures Assumption 3 holds. To overcome distribution drift, we repeat the parity label one hundred times. This results in the Kullback-Leibler divergence between distributions being exactly $\frac{100}{2}\cdot\log\frac{1-2c}{c}$. Consequently, for small enough $c$, the divergence is greater than $8\cdot\log\frac{1}{c\cdot\left(1-c\right)}$. Overall, we have satisfied all conditions for Theorem 2. Therefore, we can conclude that parity functions over unknown input bit subsets are in-context learnable, with sample complexity independent of the number of bits. Rather, the sample complexity depends only on $c$, which determines the similarity between pretraining and downstream tasks. In contrast, the VC dimension of this hypothesis class scales linearly with the number of bits. Consequently, the sample complexity of supervised learning scales linearly with the number of bits, thus completing our proof.
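The counting argument above can be illustrated with a small runnable sketch (a toy setup of our own, not the formal construction): with $n$ bits, the full class of parity functions has $2^n$ candidate subsets, so supervised learning needs a number of samples scaling with $n$; but once pre-training has narrowed the class to two candidate subsets, a constant number of labeled examples identifies the ground truth.

```python
import random

def parity(bits, subset):
    # Parity function over a subset of input bits: XOR of bits[i] for i in subset.
    return sum(bits[i] for i in subset) % 2

n = 10
true_subset = (1, 4, 7)
# Hypothetical pre-training outcome: the class is reduced to two candidates.
candidates = [(1, 4, 7), (0, 2, 3)]

random.seed(0)
consistent = list(candidates)
examples = 0
# Each random example disagrees on the two candidates with probability 1/2
# (parity over their nonempty symmetric difference is uniform), so a constant
# number of examples suffices in expectation, independent of n.
while len(consistent) > 1:
    x = [random.randint(0, 1) for _ in range(n)]
    y = parity(x, true_subset)
    consistent = [s for s in consistent if parity(x, s) == y]
    examples += 1
```

After the loop, `consistent` contains only the true subset, and `examples` is small regardless of the number of bits `n`.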
### References:
[1] Brown, Tom, et al. "Language models are few-shot learners." NeurIPS 2020.
[2] Ratner, Nir, et al. "Parallel Context Windows for Large Language Models." ACL 2023.
[3] Ge, Jiawei, et al. "On the provable advantage of unsupervised pretraining." arXiv preprint arXiv:2303.01566 (2023).
[4] Mahajan, Gaurav, et al. "Learning Hidden Markov Models Using Conditional Samples." COLT 2023. | Summary: In-Context Learning (ICL) allows large language models (LLMs) to be easily specialized to natural language downstream tasks. When users input a concatenated string of examples of a particular downstream task, modern LLMs often perform successfully without changing their weights, providing an effective new angle to tackle multiple NLP tasks immediately without fine-tuning.
This paper defines ICL within the PAC learning framework. The authors find that pretrained LLMs learn a mixture distribution of downstream tasks (though these models maximize the likelihood of self-supervised next tokens or masked tokens on the training corpus). Under mild assumptions, they show that ICL is provably guaranteed for LLMs to uncover the latent task, improving their performance without modifying any weights.
Strengths: The paper builds on crucial observations while providing theoretical guarantees.
- Even if pre-training is assumed to be a process of learning mixture distribution of downstream tasks, it could never be equivalent to fine-tuning on a target task due to other irrelevant tasks.
- Providing concatenation of independent examples is not natural for pretrained LLMs because they have never encountered such examples while pre-training.
Weaknesses: While some assumptions are claimed mild for learnability context, there are other assumptions which could not be easily acceptable given the practical behaviors of LLMs.
- To make the provable bounds practically useful, $c_1$ must be close to 1. However, the two strings delimited by the newline are not necessarily at the paragraph level. See more details in the questions.
- If we increase the size of the vocabulary, it would be easier to guarantee the existence of a positive $c_2$, but there would be a computational bottleneck while $c_2$ could still be very low, making the bound less practical. See more details in the questions.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: There are two questions mainly for the Assumption 2 and 3.
(Regarding the Assumption 2)
It is true that we can always find a positive $c_1$ that can easily bound the fraction in Equation (5) given $s_1$ and $s_2$. One concern is that users of LLMs often use the newline character either to distinguish different shots of examples or to differentiate consecutive paragraphs.
- For the former case, would it be better to bound the min and max of the fraction provided with the two different orders $(s_1, \n, s_2)$ and $(s_2, \n, s_1)$? This is because the order of few-shot examples actually matters for ICL performance.
- For the latter case, it is difficult to say that two consecutive paragraphs would be relatively independent. They could be independent in a particular task, but not generally in most NLP tasks. For example, coreference resolution is barely runnable without the previous paragraphs.
In addition, it seems $c_1$ is defined as a strongly uniform constant (rather than depending on the choice of $s_1$ and $s_2$, and even covering any possible NLP task). Would it be too strong? All in all, $c_1$ is likely to be an extremely small number, thus making the bound less practical.
(Regarding the Assumption 3)
If the size of the vocabulary is very large (typically 50k tokens), it becomes easier to find such a $c_2$ (likely another uniform constant), but $c_2$ would be very small while still incurring computational costs to handle the large vocabulary. If the size of the vocabulary is not large, it becomes harder to guarantee the existence of such a $c_2$. Overall, $c_2$ would likely become a tiny positive number, though the assumption would hold.
(Regarding the Lemma 1)
How do you justify the last line of Lemma 1, where $m_{\hat{D}}$ can be chosen to be polynomial in both $\log \frac{1}{c_1 \cdot c_2}$ and $\frac{1}{\Delta_{KL}}$ within the context of the equation in Line 252? If so, would the polynomial be both lower- and upper-bounded by $\log \frac{1}{c_1 \cdot c_2}$?
(Minor questions)
- Section reference in Line 266 is broken.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: No specific points are described or probed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your thoughtful feedback.
### Regarding Assumption 2
Our PAC-style guarantees are worst-case in nature and do not depend on the sampled $s_1$ and $s_2$. That being said, your suggestion is a promising direction for extending our framework to input-dependent bounds, which would elucidate the sensitivity of few-shot in-context learning to the order of the few-shot examples, as well as other input-specific characteristics. We will add a discussion of this avenue for extension in the camera-ready version.
In addition, your comment encouraged us to tighten the analysis in Theorem 1. We were able to relax the requirement that $4\cdot(1-c_1^2) < \Delta_{\tilde{\mathcal{D}}}$ into the weaker requirement that $1-c_{1}^2<\Delta_{\tilde{\mathcal{D}}}$. Please see the updated proof below. Note that with this tighter analysis our bounds are meaningful for any $c_1>0$, and the effect of $c_1$ on the bounds is only logarithmic.
Furthermore, note that our approximate independence assumptions are used merely for simplicity. In fact, the results remain valid even when we relax the approximate independence assumption to a weaker one, namely that the log-ratio of likelihoods according to different tasks is a martingale. Importantly, this weaker assumption will allow the propagation of information essential for coreference resolution, albeit at the cost of clarity. Please find below a proof sketch for Theorem 1, which relies solely on the relaxed assumption. This proof will be added to the camera-ready version.
### Regarding the question about Assumption 3
We agree that this assumption becomes less restrictive as the vocabulary grows larger. Please note that while in practice $c_2$ may be relatively small, the effect of $c_2$ on our bounds is only logarithmic. Therefore, even a small $c_2$ will not significantly impact the bounds.
### Regarding Lemma 1
We apologize for our mistake in line 256; it is not the correct expression. The exact form of the sample complexity in Lemma 1 is the maximum of $\frac{\left(\log\frac{1}{\delta}\right)\left(16T^{2}\right)\left(\log^{2}\frac{1}{c_{2}}\right)}{\left(\bigtriangleup_{\text{KL}}\right)^{2}}$ and $\frac{2\log\frac{1}{\epsilon}}{\bigtriangleup_{\text{KL}}-8\log\frac{1}{c_{1}c_{2}}}$ (see line 20 in the supplementary materials). As you can see, the exact form of the sample complexity in Lemma 1 does not contradict the equation on line 252; the two are consistent with each other.
### Updated proof with the approximate independence assumption
Let $\alpha\coloneqq1-\sqrt{\frac{1-c_{1}^{2}}{\Delta_{\tilde{\mathcal{D}}}}}$. Since we assume that $\Delta_{\tilde{\mathcal{D}}}>1-c_{1}^2$, it follows that $0<\alpha \le1$. Importantly, Theorem 1 guarantees that $\Delta\left(p,x,y,\tilde{y}\right)>\left(1-\alpha\right)\cdot\Delta\left(x,y,\tilde{y}\right)+c_{1}^2-1$ after a minor modification: we just need to adjust the term $\frac{\Delta\left(x,y,\tilde{y}\right)}{5\cdot c_{1}^{-2}\cdot c_{2}^{-T}\cdot c_{3}^{-1}}$ in line 39 of the supplementary materials to $\frac{\Delta\left(x,y,\tilde{y}\right)}{\left(1-\frac{\alpha}{2}\right)^{-1}\cdot c_{1}^{-2}\cdot c_{2}^{-T}\cdot c_{3}^{-1}}$.
Now, we can choose $\Delta_{\text{pretraining}}=\frac{1}{2}\cdot\alpha\cdot\left(1-\alpha\right)\epsilon$. The proof of Theorem 2 follows the same logic as the original proof, except we separate into cases based on whether the margin is at least $\frac{2\cdot\Delta_{\text{pretraining}}}{\alpha\cdot\left(1-\alpha\right)}$. In the first case, Theorem 1 assures us that for large enough $k$ the ground truth in-context predictor also has margin $\Delta\left(p,x,y,\tilde{y}\right)$ that is greater than $\left(1-\alpha\right)\cdot\Delta\left(x,y,\tilde{y}\right)+c_{1}^{2}-1$.
Now since $\Delta\left(x,y,\tilde{y}\right)\ge\Delta_{\tilde{\mathcal{D}}}$ and since $\frac{1-c_{1}^{2}}{\Delta_{\tilde{\mathcal{D}}}}=\left(1-\alpha\right)^{2}$, we have that $\Delta\left(p,x,y,\tilde{y}\right)>\Delta\left(x,y,\tilde{y}\right)\left(1-\alpha\right)\cdot\alpha$. From here the proof remains unchanged.
### Proof sketch with the relaxed assumption
Our goal is to find an alternative to Hoeffding's inequality in Equation 8 of the supplementary materials. To accomplish this, we will replace the approximate independence assumption with a weaker one that still enables a concentration inequality. Specifically, we will assume the following sequence of random variables forms a submartingale:
$\log\frac{\mathbb{P_{\phi^{\star}}}\left(p\right)}{\mathbb{P_{\phi}}\left(p\right)}-\left|p\right|\cdot\text{KL}\left(\mathbb{P_{\phi^{\star}}},\mathbb{P_{\phi}}\right)$
where $p$ is a concatenation of $n$ in-context examples. With this more relaxed assumption, we can utilize the Azuma inequality rather than Hoeffding's inequality, thus avoiding the approximate independence assumption. Importantly, this new assumption implicitly includes Assumption 3, as it handles the distribution drift caused by the artificial new line token in an implicit way, rather than the explicit treatment we currently have in Equation 2 of the supplementary materials.
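For reference (this is the standard statement, not a result from the paper): for a submartingale $X_0,\dots,X_n$ with bounded differences $\left|X_i-X_{i-1}\right|\le b_i$, the Azuma-Hoeffding inequality gives the lower-tail bound

```latex
\mathbb{P}\left(X_{n}-X_{0}\le -t\right)\le\exp\left(-\frac{t^{2}}{2\sum_{i=1}^{n}b_{i}^{2}}\right),\qquad t>0,
```

which plays the role that Hoeffding's inequality played under the approximate independence assumption.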
So far, we have successfully adapted the proof of Lemma 1 to incorporate the relaxed assumption. To prove Theorem 1 under this relaxed assumption as well, we must quantify the efficacy of the Bayes optimal classifier that classifies inputs based on the ground truth **prompted** mixture component. For simplicity, we will assume that this classifier performs at least as well as the Bayes optimal classifier that classifies inputs according to the **unprompted** ground truth mixture component. Note that this is a natural assumption when the downstream task is a learning algorithm such as linear regression. Finally, note that with the approximate independence assumption, predictions made using the prompted ground truth mixture component can be associated with those of the unprompted ground truth mixture component, rendering this assumption unnecessary.
---
Rebuttal Comment 1.1:
Title: Thanks for your rebuttal.
Comment: Thanks to the authors for their additional work to clarify my comments and feedback. Indeed, this work will shine better by addressing some of your extensions and tighter bounds with respect to my suggestions. I hope these will be included in the final draft and future directions. Based on this feedback, I have increased my score.
---
Reply to Comment 1.1.1:
Title: Thanks for your response
Comment: We thank the reviewer for helping us improve our paper. We assure you that we will include the extension and tighter bounds in the camera-ready version of the paper. Would you consider increasing the Soundness score for our paper as well? | null | null | null | null |
Collaborative Alignment of NLP Models | Accept (poster) | Summary: This paper frames operationalizing concepts as a solution to enumerate all possible concepts and develops a corresponding framework, CoDev, that starts by collecting text and labels from users, then has GPT-3 generate text and labels. When there is disagreement among users and GPT-3, users are asked to relabel the instance until convergence. The authors conducted experiments using the Amazon review dataset and MNLI to compare CoDev with baseline models. The authors also conducted a pilot study with 4 people using CoDev, from providing data to improved model alignment and more complex outputs after 5-7 iterations.
Strengths: The paper points out an important question: how NLP models can align with multiple users' values. And it is novel to use GPT-3 as the global model and RoBERTa-large trained on user input as the local model, and to try to resolve the disagreement between the two models by collecting user labels iteratively.
Weaknesses: 1. The paper's approach of handling disagreement by repeatedly collecting user labels until convergence assumes that there exists a single ground truth for each instance. But many studies have shown that a single instance can have multiple acceptable answers [1, 2]. The study design also conflicts with the stated goal of aligning multiple users' values. Disagreement is not a sign of error but could be a sign of multiple possibilities. How could you determine that the disagreement between the local model and the global model is not acceptable?
[1]Wan, R., Kim, J., & Kang, D. (2023). Everyone’s Voice Matters: Quantifying Annotation Disagreement Using Demographic Information. Proceedings of the AAAI Conference on Artificial Intelligence, 37(12), 14523-14530. https://doi.org/10.1609/aaai.v37i12.26698
[2]Aida Mostafazadeh Davani, Mark Díaz, Vinodkumar Prabhakaran; Dealing with Disagreements: Looking Beyond the Majority Vote in Subjective Annotations. Transactions of the Association for Computational Linguistics 2022; 10 92–110. doi: https://doi.org/10.1162/tacl_a_00449
2. The authors mention the shortcomings of having experts label datasets in the introduction. But many open-sourced datasets for machine learning studies are actually labeled by crowd workers from third-party platforms such as Amazon MTurk. And determining sentiment or toxicity is a very subjective task; there is no expertise for judging whether others feel a text is toxic or not. Therefore, the research gap mentioned in the introduction and the dataset/task picked for the experiment make the authors' framework and argument weak.
3. For Table 2, it is not clear how SB is different from biased SB. It is also not clear how the data collection method is different from the baseline BERT model and the CoDev model. There are also many clarity concerns as listed in the following Questions section. These confusions make the paper less convincing.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. There is no clear definition of your local model and global model other than Figure 1. Could the authors please clarify why and how they use GPT-3 as a global model? What prompts are used?
2. It is not clear whether the (d) step (repeated labeling) in Figure 1 involves a single user during the iteration or multiple users. If multiple users, how did the authors handle disagreement among users? If different users participate in different rounds, this needs to be clarified. If a single user, how does the CoDev system enable multi-user interaction?
3. For the pilot study, it is not clear how the authors collected the seed data. Are those labels given by a single user for each instance? If each instance collects labels from multiple users, how do the authors aggregate the labels?
4. For the pilot study, it is not clear how the authors evaluate their study. If better alignment is the evaluation metric, how do the authors evaluate the quality of alignment? In lines 289 and 290, the authors say, 'As the user made repeated adjustments to both models, the disagreements gradually centered around the concept boundaries, reaching a point where users could no longer determine the correct behavior.' It is not clear why users no longer being able to determine the correct behavior is the standard for good alignment. Shouldn't users always have the right and ability to determine what they think is correct?
5. In Table 4, why are there labels only for the 'Seed data' column while the 'Initial rounds' and 'Final rounds' columns don't have corresponding labels? Do the initial and final rounds evaluate alignment based on the labels? Also, the Table 4 caption mainly highlights that the text in the final rounds becomes much more complex than in the initial rounds. Why is this important? Is this an evaluation criterion for the pilot study?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors discussed the limitation of not solving literal disagreement, since they only handled interference caused by machine learning shortcomings.
Flag For Ethics Review: ['Ethics review needed: Discrimination / Bias / Fairness Concerns', 'Ethics review needed: Inadequate Data and Algorithm Evaluation']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks a lot for your comments and constructive feedback.
W1) As mentioned in the general rebuttal, we agree that literal disagreements and multiple possibilities are a very important question, but even in the absence of literal disagreements, the problem of ML interference still remains, and that is the main focus of this paper. We appreciate your comment and references, and we will add an extra discussion section to address this issue.
W2) We agree that many open-sourced datasets are labeled via Amazon Mechanical Turk; however, we are focusing on post-training adjustment. For example, in the NLI task, the dataset was collected through third-party labeling, but further studies showed low performance on concepts such as downward and upward monotonicity. In response, some papers collected data to address the low performance on these concepts, but it can easily be shown that the whole concept is not covered.
Furthermore, data collection and labeling are very challenging for niche concepts for which data is hard to gather. Our work uses LLMs to address such issues. In other words, we use LLMs as an infinite pool of unlabeled data, and CoDev helps users find and label data points in this pool that belong to their concept.
We totally agree with you that tasks such as sentiment and toxicity classification are very subjective, and different users might have different opinions; the goal of the pilot study on such tasks was to show that even a single user needs assistance to operationalize their concept (more details are below in Q3-5).
W3) We apologize for the ambiguity of the text in this section; we will improve the writing in the camera-ready version. In summary, as stated, CoDev uses an LLM to generate data for a concept. To start the process, users need to provide some seed data. The point of this experiment is to show that even in the very extreme case of biased seed data, CoDev can still generalize and cover the concept.
- SB here is all reviews containing the word “skin” or “battery”.
- biased-SB contains reviews that are positive and contain the word “skin”, or that are negative and contain the word “battery”.
- The base model is a model that has low performance on reviews containing “skin” and “battery”, and our goal is to improve this model's performance.
- Normal data collection by definition only adds data that are biased (i.e., reviews that are positive and contain the word “skin”, or that are negative and contain the word “battery”). As a result, data collection leads to very high performance on biased-SB but low performance on SB.
- CoDev, on the other hand, improves performance on both SB and biased-SB, even though it started from biased seed data.
---
Q1) Both local and global models are classifiers (RoBERTa-large in our experiments). GPT-3 is used for generating sentences within a concept. Lines 94-96 explain the process. In particular, our goal is to find sentences that belong to a concept. To do so, we iteratively use m examples from the concept as a prompt to GPT-3; in response, GPT-3 generates more examples within the concept. We apologize for the confusion and will make the definition clearer.
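A minimal sketch of this generation loop, with a hypothetical `complete` function standing in for the GPT-3 API (the prompt template, concept, and canned outputs below are illustrative, not the exact ones used in the paper):

```python
import random

def complete(prompt):
    # Hypothetical stand-in for a GPT-3 completion call; it returns canned
    # continuations so this sketch is runnable without any external service.
    return "The battery died after two days.\nGreat battery life, lasts all week."

def generate_in_concept(seed_examples, m=3, rounds=2):
    # Iteratively prompt the LLM with up to m concept examples and fold the
    # generated sentences back into the pool of concept data.
    pool = list(seed_examples)
    for _ in range(rounds):
        shots = random.sample(pool, min(m, len(pool)))
        prompt = "Examples of reviews about battery life:\n" + "\n".join(shots) + "\n"
        new = [line for line in complete(prompt).splitlines() if line.strip()]
        pool.extend(new)
    return pool

pool = generate_in_concept(["Battery drains too fast.", "Charges quickly and holds well."])
```

In the actual pipeline, generated sentences would additionally be filtered by the local/global disagreement criterion before being shown to users for labeling.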
Q2) For simplicity, Figure 1 only shows the steps in CoDev for a single user. In Section 3.4 we explain interference and how we handle it. We will add a new figure with multiple users in the appendix for clarity.
Q3) For each row, the seed data was written by a single user for his/her own concept.
Q4-5) This pilot study is a small part of our experiment section; our goal in conducting it was to show:
- Humans need assistance to make their concepts clear. For example, in the toxicity-Islam case, the user thought they had a good grasp of the concept in their mind. However, when the user stated that “ISIS is violent” is non-toxic but “Muslims are violent” is toxic, CoDev asked the user about the Taliban, Hezbollah, radical Islam, etc., and as shown in Table 4, this back and forth surfaced edge cases the user had not thought about beforehand and could not come up with labels for (i.e., they required more thinking to make the concept clear).
- As users continue working with CoDev, the number of disagreements between the local model and global model decreases and becomes limited to sentences that users might not count as clear misalignments with their concept.
For a more thorough analysis of CoDev, please refer to the other parts of our experiment section.
We believe that helping users operationalize their concepts and handling ML interference are important challenges in alignment, and we hope these comments address the listed weaknesses enough for you to consider changing the score.
---
Rebuttal Comment 1.1:
Comment: Thank you for the thorough response and clarification. Please consider integrating these clarification, explanations and the discussion into the manuscript. After reading the rebuttal, I decided to change the score from 3 to 5. | Summary: This paper proposes a new framework to debug NLP models. Specifically, while debugging a global model, it starts with training some local models on specific concepts. Then new data used for model improvement is labeled if the global model disagrees with the local model.
Experiments indicate that the proposed approach (CoDev) outperforms the baseline approach (AdaTest) over multiple settings.
Strengths: 1. This work focuses on an important but under-explored problem: debugging NLP models. The findings could potentially benefit the community.
2. The paper is well-written, easy to understand the contents.
Weaknesses: 1. There is a gap between the theoretical guarantee and the real experiments. Specifically, the paper experiments with LLMs but provides proofs based on linear regression.
2. It's unclear if some of the results are statistically significant. For example, in Table 3 (about MNLI and sentiment analysis), the improvement seems marginal.
3. All experiments seem to focus mainly on toy tasks like classification or NLI. The takeaway would be stronger if the paper has discussions about extending to more challenging NLP tasks such as structure prediction.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Missing discussion about active learning in related work.
2. (see weakness 3) How to extend the proposed framework to structure prediction tasks? Also, which NLP tasks do you think the proposed approach would fail?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, this paper includes the limitation section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for your constructive feedback.
- W1) Unfortunately, providing theoretical results for deep neural nets (and transformers in our case) is an open problem; however, previous work (references 10, 11, 12, 13) considers overparameterized linear regression as a way to provide some insight into how these models might work. We followed this line of work and showed how our method works in the overparameterized linear regression setting. The whole point of the theoretical section is to show that, matching our intuition, if learning the desired function in a local neighborhood requires few samples, then we can learn a concept in isolation with few queries (instead of asking users many queries, e.g., see Fig. 2(c)). However, adding other concepts can cause interference (e.g., see Fig. 2(b)). Our theory bounds how many new queries we need to ask the user due to interference (as shown in Propositions 1 and 2, the number of queries can be much smaller than the dimension of the inputs, d).
- W2) We ran each experiment in Table 3 ten times and reported the average. We ran a t-test on the distributions; although the improvement is small, the t-test shows statistical significance. We will provide the t-test results in the camera-ready version.
- W3) For non-classification tasks such as next-word prediction, we can assume that the local functions are mappings from pairs of (input, output) to a binary label indicating whether the output is desired or not (e.g., ('all muslims are', 'murderer') → non-desired, but ('all muslims are', 'going') → desired). Therefore, the local functions can still be binary and simple while the global function is complex (e.g., generation instead of classification). Thank you so much for this comment; we will add a discussion on how to extend this work to more sophisticated non-classification tasks. We would also like to note that our experiments include multi-class classification, not just binary.
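This (input, output) → {desired, non-desired} framing can be sketched as a toy local function (the blocklist below is purely illustrative, not our actual learned local model):

```python
def local_desirability(pair):
    # Toy local concept over (input, output) pairs: 1 = desired, 0 = non-desired.
    # A hypothetical blocklist stands in for the learned local classifier.
    inp, out = pair
    undesired_continuations = {"murderer", "murderers", "violent"}
    return 0 if out in undesired_continuations else 1

labels = [local_desirability(p) for p in
          [("all muslims are", "murderer"), ("all muslims are", "going")]]
```

The point of the sketch is only that the local concept remains a simple binary function even when the global model performs open-ended generation.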
---
- Q1) We appreciate your comment and will add an extra section comparing with active learning methods. In summary, CoDev learns each concept individually in isolation and only queries inputs where the global and local models disagree. Thus it does not need to query parts of the space where local and global overlap (i.e., agree). Also, learning local concepts in isolation allows CoDev to learn each local concept better than active learning methods, which only add data to the global model and might merely memorize the local concept data.
- Q2) We explained how to extend this work to more complex scenarios in (W3). This approach fails for a user who considers a very general concept that cannot be divided into subparts.
We hope these comments address the listed weaknesses enough to warrant a change in score. | Summary: This paper describes a multi-user collaborative model alignment framework that teaches certain desired concepts (behaviors, rules) to large language model (LLM).
The authors train a global model that integrates the original data and all concepts, and a local model for each concept. The LLM is guided to generate examples where the local and the global model disagree, and these examples are presented to users for annotation. Finally, these new annotations are used to update the local and global models.
The proposed framework is evaluated on cases that teach an NLI model about downward- and upward-monotone concepts and a sentiment classifier about Amazon products reviews.
Results show that it outperforms AdaTest and other baselines.
Strengths: * the targeted question is important and has attracted a lot of research interest. The proposed approach is easy to understand, and experimental results show that it is effective, especially when there is interference between multiple concepts
Weaknesses: * the experimental setup is not described in detail; it may not be easy for other researchers to reproduce the study
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Line 29: 'does not to lead to'
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive feedback. As mentioned in the general rebuttal, we will release code and data with the camera-ready version, and we will make the experiment section clearer. Hopefully this information removes this weakness, leading to an improved score.
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal and other reviews. | Summary: The paper proposes a method based on data augmentation and instance selection for training a supervised model to be aligned with "concepts", where concepts dictate specific model behavior on certain inputs. In the proposed setup, users illustrate a concept with a training example; this is followed by generating additional examples with GPT-3, where the acceptance of the generated examples is proportional to the disagreement between a global model trained on all the data and a local model trained to learn the concept perfectly. In experimental evaluations, the proposed instance selection method seems to outperform other reasonable strategies for selecting data for aligning supervised models on specific concepts.
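The instance-selection idea summarized above can be sketched in a few lines (a toy illustration only: `local_model` and `global_model` are hypothetical scoring functions returning P(label=1), and the paper's actual acceptance rule is probabilistic rather than a deterministic top-k as here):

```python
def select_disagreements(candidates, local_model, global_model, budget):
    """Toy instance selection: rank LLM-generated candidates by the
    disagreement between the per-concept (local) model and the model
    trained on all data (global), and keep the most contested ones."""
    scored = [(abs(local_model(x) - global_model(x)), x) for x in candidates]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [x for _, x in scored[:budget]]
```

Examples selected this way are exactly the ones where user annotation is most informative, since either the local or the global model must be wrong about them.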
Strengths: - The proposed data selection method seems original; the presented method with an LLM data generator in the loop is under-explored.
- The problem of enforcing user desires on a supervised model is meaningful.
Weaknesses: - The proposed method for aligning user "concepts" seems somewhat laborious or frustrating from a user standpoint, especially in cases where a user's concepts conflict with those of others. E.g. the paper notes, "In practice, this means that a user adding a new concept needs to make sure it does not break any concepts from other users, a process similar to regression testing in software engineering." -- What happens in cases where a user's concept cannot be incorporated without breaking somebody else's concept?
- The experimental setup of the paper is hard to follow and is unlikely to be reproducible.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - The idea of a concept seems a little vague; please consider discussing what kinds of user intents may be captured in the concepts. Perhaps with an eye toward realistic concepts that users want to see alignment on. As I see it, the concept seems like a rule for labeling specific input instances based on easily examinable features of the input.
- Many of the ideas described in the paper (user-defined concepts, conflicts between them, training on user concepts) seem similar to the line of work on programmatic creation of datasets with weak labelers. Please cite and discuss similarities and differences to this line of work: https://scholar.google.com/citations?user=rfwwtFYAAAAJ&hl=en
- The paper (through references of similarity) and especially the experimental setup (first two sub-parts) heavily depend on familiarity with prior work. Please describe the experimental setup and these prior works in greater detail in this paper.
- This is perhaps a matter of style and framing, but I would recommend renaming the title of this paper. The presented work seems to be much more specific in its scope than the title seems to indicate. The presented approach relies on users labeling data (significantly devalued labor in ML) to align models with their preferred concepts, a large body of work may be framed as "collaboratively developing NLP" models by this naming logic. Further, the explored tasks are simplistic classification tasks; arguably, NLP encompasses a significantly broader range of tasks than these alone.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: The paper discusses limitations adequately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks a lot for your constructive feedback.
W1) In this work we focused on post-training adjustment to enforce business rules, rectify undesired behavior, or align with user values. A concept relates a set of inputs to desired behaviors, e.g. “religion does not connote sentiment”. To teach a model a concept, we need to create inputs as well as outputs. As you mentioned, one way to do so is to write a program that automatically generates a dataset for a concept. However, concepts can be more complex, and simple programs cannot cover the whole input space. For example, Checklist (reference 14) is similar to your references on programmatic creation of data. In particular, Checklist creates a dataset with some rules for each concept. However, as shown in the first part of the experiment section (Table 5), these programmatic patches miss a lot of subtleties in these concepts (i.e., after teaching a concept to a model we can simply run CoDev to find a lot of bugs that have not been covered). CoDev, on the other hand, utilizes GPT-3 to generate inputs that cover the concept more thoroughly. In other words, writing a program to generate inputs only covers a small part of the concept, while the random walk using GPT-3 described in Section 3.1 can cover the majority of the concept space.
Furthermore, as shown in the pilot study, labeling inputs from a concept can be tricky and cannot be captured with a simple program. For example, in the toxicity-Islam case in Table 4, the user starts by stating that “ISIS is violent” is non-toxic but “Muslims are violent” is toxic, and then CoDev asks the user about sentences like “Taliban is violent” or “Hezbollah is violent”, etc. Thus the user should interact with the model to make the labeling clearer; this cannot be done with simple programs.
Regarding interference, a user wants to add their concept without breaking the model on other concepts. The presence of possible conflicts is annoying, but that is not a function of our method; it is a function of real-world complexity, and our method helps solve it. The alternative of not having explicit concepts and just labeling data would not work and would be more frustrating than handling interference. For literal disagreement between users, please see the general rebuttal.
We would also like to note that our method makes concept alignment less laborious for the user by generating sentences with GPT-3 and only requiring the user to label them; moreover, the majority of alignment checks happen between the local and global models, and the user is only queried when there is a disagreement between the local and global functions (Section 3.3).
W2) As mentioned in the general rebuttal we are going to release code and data for reproducibility. We will also make a significant improvement on the writing of the section.
Q1) We addressed this question in (W1). Also, as shown in the experiments, concepts can be things like downward or upward monotonicity in NLI (a well-known problem for state-of-the-art NLI models), simpler concepts such as “X person = not antonym(X) person“ in QQP, which [14] revealed SOTA models perform very poorly on (Table 5 in the appendix), or even more complex concepts as depicted in the pilot study.
Q2) Checklist (reference 14) is similar to programmatic creation of data. In particular, Checklist creates a dataset with some rules for each concept. However, as shown in the first part of the experiment section (Table 5), these programmatic patches miss a lot of subtleties in these concepts. CoDev utilizes GPT-3 to align the model with the concept completely and also handles interference between concepts. Thanks a lot for the references; we will make a more thorough comparison to programmatic creation of datasets with weak labelers in the camera-ready.
Q3) Thanks a lot for the feedback; we will make the first two sections of the experiment setup clearer in the camera-ready.
Q4) Thanks for the feedback; we will make the title more specific.
We hope these comments address the listed weaknesses enough to warrant a change in score.
---
Rebuttal 2:
Title: Acknowledgement of rebuttal.
Comment: I have read the author rebuttals and other reviews. The rebuttals are illustrative, and I encourage the authors to make the changes they have committed to in their response.
I stand by my original assessment of the paper: It presents a well-scoped novel method and is well-executed, however, it presents work that is no more than moderate-to-high impact in my best judgment. | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their constructive feedback, and for stating that the problem we are considering is “important” and “beneficial to the community”, that our “approach is novel”, our “theory provides some insights”, the paper is “well written”, and “experiments show the effectiveness of the method”. Below we address some of the concerns shared by the reviewers. For other concerns, please see the individual responses.
---
## Literal interference between users:
---
Reviewers 2 and 5 have pointed out that our work does not address literal interference, where two users assign different labels to the same sentence. We acknowledged this limitation in the paper and highlighted it as an avenue for future research. We will make this more clear in the revision, and note that the interference we focused on still remains as an important problem even if there is no disagreement between users.
To clarify, there are two distinct types of interference when aligning a model to multiple users (or training a model on different distributions):
1. Literal interference: where users disagree in labeling the same sentence.
2. ML interference: interference arising from over-generalization in machine learning.
Both challenges are critical, yet they address different facets of the problem. Our paper particularly addresses the latter, ML interference. Previous research [1,2,3] has shown that any local change in ML models can interfere with other parts of the model (e.g., comparing Figure 3(a) and Figure 3(e), you can see that a change inside the concept caused a change outside the concept). We want our model not to overfit (i.e., merely memorize the training data) but to generalize to unseen data; however, this generalization depends heavily on the other concepts and the previous data. Take, for instance, a world where only bananas are yellow. An ML model might be guided by a user to recognize bananas solely based on their yellow color. Now if a new user introduces another yellow object, like corn, the model must discern other distinguishing features. Merely combining training data for two such concepts doesn't suffice (as shown in [2]); the boundaries must be distinguished. Our work outlines methods to mitigate this type of interference.
Finally, we would like to note that while we haven't introduced a specific mechanism to resolve literal interference, our method can surface such interference. This can pave the way for resolution through discussions, voting, or even tweaking the model to reflect multiple perspectives, especially in cases where, as R5 noted, there isn't a consensus among users.
We apologize for the ambiguities in the text. In the new version we delineate both types of interference, emphasizing the intricacies of ML interference. We will also add an experiment that surfaces disagreements between two local models (i.e., proxies for humans) to showcase the effectiveness of our method at surfacing disagreements. We again acknowledge that handling literal disagreement is very important but out of scope for this work, and a great future direction.
---
## Some ambiguity in the experiment section and reproducibility of results
---
Reviewers have mentioned that some parts of the experiments are hard to follow and might not be reproducible. We are releasing the full code and data in the camera-ready, so the experiments can be easily reproduced. Even though some experiments have humans in the loop and thus inherent variance, the gap between AdaTest and CoDev (Table 2 and line 226) is large enough that this should not matter.
Regarding the ambiguity, we will substantially improve the writing and apply all your feedback in the text. In summary, the following are the messages of the four main sections of our experiments:
- CoDev works better than AdaTest, finding more bugs and causing no interference (whereas AdaTest causes interference).
- CoDev works even when the seed data is biased
- CoDev sampling mechanism outperforms random or uncertainty sampling
- A very small pilot study to show humans need assistance to operationalize their concept (i.e., we showed that they might not even know the exact boundaries of their concept beforehand).
---
[1] Khani, Fereshte, and Percy Liang. "Removing spurious features can hurt accuracy and affect groups disproportionately." Proceedings of the 2021 ACM conference on fairness, accountability, and transparency. 2021.
[2] Raghunathan, Aditi, et al. "Adversarial training can hurt generalization." arXiv preprint arXiv:1906.06032 (2019).
[3] Srivastava, Megha, et al. "An empirical analysis of backward compatibility in machine learning systems." Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2020. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors propose a framework for collaborative NLP development (CoDev) that enables multiple users to align a model with their beliefs.
The proposed work, CoDev, aids users in clarifying their concepts (an area humans often struggle with) and assists ML models in handling conflicts between concepts (an area where ML often struggles due to its inability to accommodate local updates).
The authors' main insight is learning a local model for each concept, and a global model to integrate the original data with all concepts. The authors then steer a large language model to generate instances within concept boundaries where local and global disagree. Their experiments show CoDev is effective at helping multiple users operationalize concepts and avoid interference for a variety of scenarios, tasks, and models.
Strengths: 1. The paper is pretty easy to follow and proposes a new framework that is useful for controlling LLMs to constrain their outputs according to user-defined concepts.
2. The idea of creating proxies from local models for user concepts is novel and interesting.
3. The experiments with real-world users are nice, as this work is all about collaborative development with users.
4. There is some theoretical analysis of the proposed framework, which is nice and provides additional insight.
Weaknesses: 1. I think more discussion and more datasets could be used for CoDev with biased seed data and CoDev with unlabeled data. Since the work is centrally about the data distribution and controlling models via user input, a more diverse set of datasets could be used, instead of just selecting positives and negatives from a review dataset. It might provide better evidence of the generalizability of such a framework.
2. Since AdaTest is a major related work, maybe some more comparison with it in the seeded datasets and unlabeled data would be nice?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Above.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations and societal impact are properly discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for your constructive feedback.
- W1) We chose only positive and only negative cases to showcase that even in a very extreme case of bias in the seed data, CoDev can still generalize to the whole concept. We will add extra experiments with different distributions such as other segments of this dataset with less extreme biases.
- W2) AdaTest iteratively uses 7 examples from a concept to generate more examples, and thus it cannot work with unlabeled data. As a result, we cannot compare CoDev against AdaTest when the data is unlabeled. We will make the difference between AdaTest and CoDev clearer in the camera-ready. We note that where we can, we do compare to AdaTest (first section of the experiments), both for finding bugs and for fixing bugs without interference (Table 1 and lines 225-229).
We hope these comments address the listed weaknesses enough to warrant a change in score.
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal and other reviews.
Thank you very much for the explanation as they help clear things up a bit. | null | null | null | null | null | null |
Privacy Auditing with One (1) Training Run | Accept (oral) | Summary: The paper improves the computational efficiency of auditing differentially private machine learning systems by connecting differential privacy and statistical generalization. The authors propose the first 1-round scheme compared to the standard solutions with hundreds of training rounds.
The auditing procedure operates as follows. 1) Randomly identify $m$ data points to either include or exclude. 2) Run the algorithm on the randomly selected dataset. 3) Given the algorithm's output, the auditor “guesses” whether each data point was included or excluded.
The main theoretical contribution is an improved analysis that is tailored to yield tight bounds. The auditing scheme requires minimal assumptions. The authors theoretically justified the improved efficiency of auditing via membership inference on multiple examples simultaneously. The paper shows that standard membership inference attacks can be used for auditing analysis, i.e., exploiting the parallelism of multiple independent data points in a single run of the algorithm in lieu of multiple independent runs.
In experiments, the authors audited DP-SGD training on a WideResNet model, trained on the CIFAR10 dataset across multiple configurations. The experiments contain both gradient and input attacks. The experimental results confirm the contributions claimed by the authors before.
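The three-step procedure summarized above can be mimicked end-to-end on a toy mechanism (a sketch under assumed details: each audited example acts as a one-hot "gradient" canary, the mechanism releases its coordinate plus Gaussian noise, and the auditor guesses via a simple 0.5 threshold; this is not the paper's exact attack):

```python
import random

def one_run_audit(m=1000, sigma=1.0, seed=0):
    """Toy one-run audit: each of m canaries is independently included with
    probability 1/2; a noisy mechanism reveals each canary's coordinate as
    inclusion_bit + Gaussian noise; the auditor guesses with a 0.5 threshold.
    Returns (number of correct guesses, number of guesses)."""
    rng = random.Random(seed)
    included = [rng.random() < 0.5 for _ in range(m)]
    released = [int(b) + rng.gauss(0.0, sigma) for b in included]
    guesses = [y > 0.5 for y in released]
    correct = sum(g == b for g, b in zip(guesses, included))
    return correct, m
```

With little noise the auditor guesses almost every canary correctly, implying a large empirical epsilon; with heavy noise accuracy falls to chance level and only a trivial lower bound can be certified.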
Strengths: + The paper gives the first scheme of one-round privacy auditing, which is computationally efficient. This is a totally new perspective, jumping out of prior literature.
+ Theoretical foundation has been well built, i.e., Theorem 3.1. The formulation of privacy auditing bridges guess and privacy, which can be adopted in future instantiations/applications.
+ The experimental results are convincing, i.e., confirming the paper's contribution.
+ The writing is excellent. The authors clearly introduce the scheme, and meantime, show their deep insights and creative knowledge. At least, I feel I learned a lot in this paper.
+ Personally speaking, I think this paper will motivate many future works.
Weaknesses: I did not find the weakness.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: The paper is self-contained. The details in the supplementary are very sufficient.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors studied the limitation comprehensively.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your review!
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply!
I have read all contents on this page. I will keep my rating. | Summary: The paper gives a simple version of a differential privacy (DP) auditing, and the proposed method is related to the recent works (G. Andrew, P. Kairouz, S. Oh, A. Oprea, H. B. McMahan, and V. Suriyaku- mar. “One-shot Empirical Privacy Estimation for Federated Learning”, 2023 and S. Zanella-Beguelin, L. Wutschitz, S. Tople, A. Salem, V. Ruhle, A. Paverd, M. Naseri, and B. Kopf. “Bayesian estimation of differential privacy”, 2022). There is also a theoretically rigorous analysis for the proposed method.
The work of Andrew et al. (2023) seems very much related (it also does one-shot auditing) but is more heuristic. This paper provides confidence intervals for the epsilons, whereas it seems to be an open problem whether the empirical epsilons of Andrew et al. (2023) can be rigorously connected to the theoretical guarantees.
The idea in the proposed method is simple: a certain amount of data (canaries) is held out for auditing such that each sample is randomly included in the training, and in the end the total number of correct guesses (whether a sample was or wasn't in the training data) gives the empirical epsilons (high-probability lower bounds). This ratio can be connected to the theoretical guarantees via Theorem 3.1. The main result is the analysis of the method (Theorem 3.1). The experiments are very similar to those used in the recent auditing paper by Nasr, Hayes, Steinke, Balle, Tramèr, Jagielski, Carlini, and Terzis.
A pathological example in the paper shows the tightness of the epsilons given by this auditing method, and the paper also includes some discussion of the looseness of the bounds in realistic situations.
Strengths: - The idea is simple, the analysis is sound, and the paper is easy to read. All in all, I think it fits NeurIPS well
- The analysis is tight in a sense that there are pathological DP algorithms for which the empirical epsilons are tight.
- The contribution seems clear: it puts one-shot auditing on a more rigorous footing (more rigorous than the previously proposed approaches)
Weaknesses:
- It remains a bit unclear from the paper how this compares to other auditing methods. It would be nice to see how it compares, e.g., to the empirical epsilons given in [NHSBTJCT23]. Especially in the case of black-box attacks there seems to be a big gap. It would also be interesting to see how the results compare to those given by the one-shot method of [AKOOMS23]. Even if the epsilons reported here are on a more rigorous footing than those of [AKOOMS23], I would guess that in practice one might end up using the method of [AKOOMS23] if it tends to give much more realistic epsilons.
- It still remains a bit unclear how useful this would be in practice. Especially in the black-box setting the results look weak: already with 10000 samples (all used for auditing) or a small number of additional samples (not used for auditing), the empirical epsilons are really far from the theoretical ones.
- One small weakness is perhaps a certain lack of originality: the main contribution is the mathematical analysis, while the main ideas behind this auditing method seem to have been laid out already in the recent works [ZBWTSRPNK22] and [AKOOMS23].
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: In the white-box setting, your experiments show that increasing the number of data samples increases the epsilon lower bound. In Figure 3 you have CIFAR-10 results up to $m=5000$ samples. Did you try $m>5000$? How close can you get? Why not report results for $m>5000$?
In the white-box setting, where you insert the canaries and get the best results: how expensive is the attack itself? I mean, if you construct the canaries at each step, how expensive is it compared to the model training?
This approach is for auditing an ML system, where it seems reasonable to use a small number of samples and report the epsilon lower bounds. What if the protection gets really strong for realistic training-data sizes; are those epsilons meaningful anymore? Would there be a way to make the method dataset/model-specific?
Typo, p.5:
"First, we evaluate the effect of the number of the auditing example"
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes, there is an extensive discussion in the end about the limitations of this approach. The extended version (supplements) has also a discussion about possible directions for future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments, which we respond to now.
**On comparison with previous works:** Nasr et al. [NHSBTJCT23] uses multiple runs and, as a result, they achieve tighter bounds than we do for the same algorithm; however, we are not sure if there is any meaningful comparison that we can do here as their method cannot analyze our setting and vice versa.
Andrew et al. [AKOOMS23] propose an epsilon estimation technique which may not be a lower bound; their results in Table 1 show an estimate of epsilon that is larger than the provable epsilon upper bound. In the version that was available before the NeurIPS deadline, their estimated epsilon actually significantly overestimated the true epsilon, i.e., the theoretical upper bound was 50 and the estimate was 78. Given that their approach does not produce lower bounds on epsilon, a direct comparison is not very meaningful. An interesting direction for further work is to combine our methods with their approach.
**On weakness of the black-box results:** We agree that in the black-box setting our results are weak, as is the case in all previous work. The black-box setting is very challenging, and there is a need to develop better attacks. We note that our analysis can then be applied to better attacks that are developed in the future.
**On novelty:** The idea of using multiple canaries to audit DP mechanisms has been mentioned for years now; the first paper on auditing DP-SGD by Jagielski et al. [JUO'20] introduced multiple examples and used group privacy to analyze their results. The main novelty of the work is in creating a framework to theoretically analyze the setting and to provide a tight mathematical analysis.
**Experiments with $m>5000$:** We didn’t experiment with more canaries due to memory limitations. In Figures 9 & 10 we simulated results for up to 1,000,000 and 100,000 examples, respectively.
**Cost of the attack:** In the white-box setting, creating canaries is very simple, as they are simple gradient vectors that have a large value on a single coordinate. The main cost is computing the score function, which requires comparing each gradient update vector with all of the canaries, i.e., a dot product between two large vectors. This cost can be reduced in the future by only focusing on a specific layer in the network or a small set of parameters. In the black-box setting, the canaries are simply examples and we only need to evaluate the final loss, which is much easier than computing inner products.
**Would there be a way to make the method dataset/model specific?** The choice of auditing examples/canaries and the score function can be tailored to the dataset/model.
---
Rebuttal Comment 1.1:
Comment: Thanks for the replies! I am keeping my score. | Summary: The authors propose a scheme for auditing differentially private machine learning models with a single training run (instead of thousands as have been used so far).
Strengths: 1. limitations clearly explained and illustrated
2. paper is really well structured (except for related work; see weaknesses)
3. important impact within DP auditing
Weaknesses: 1. It would help the reader if the related work was moved to before the empirical evaluation. Methods from the empirical evaluation are adapted from related work so more explanations earlier on might help with understanding
2. The comparison with related work falls a bit short. It would be nice to know whether and when the heuristic proposed by MEMPST21 is not practically applicable. If it provides a better bound that holds true in most scenarios (even if there is no mathematical guarantee for it), it might be preferable to a loose bound with a mathematical guarantee.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. In ll. 123 f., the authors say that the design of the attack is orthogonal to the analysis of the privacy attack. I wonder whether this claim is necessarily always true. Design methodology introduced for attacks with 1000s of runs might suffer from a high variance that could perhaps be highly detrimental for a single training run?
2. Do the authors believe that the methodology could be extended to multiple training runs to have a tradeoff between either 1000 or 1 runs?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: the limitations have been adequately addressed and illustrated
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments, particularly the suggestions regarding the related work section.
**1. Line 123:** We will clarify that there may be overlap between the design of attacks and our analysis. But, conceptually, most auditing attack design considerations are the same in our setting and in the setting of prior work. That is, we are focused on creating examples that have a high effect on the training procedure and then detecting that effect. Nevertheless, we will make it clear in the text.
**2. Multiple runs:** Extending our analysis to handle both multiple runs and multiple examples is a very interesting topic for future work and something that should be possible. However, we do want to emphasize that, for large models, training a single model is very expensive, and we think it is also important to improve the single model setting.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. After having read the rebuttal and the other reviewers' comments, I am more confident that this work constitutes a good paper that should be accepted. | Summary: This paper presents a one-shot approach for auditing privacy. Their approach is as follows: given $n$ independent input samples and a (DP) algorithm to audit, they divide the $n$ input samples into two groups $X$ and $Y$ of size $m$ and $n-m$ respectively. Then they randomly select a partition of the first part $(X_1,X_2)$, and only use $X_1 \cup Y$ as the input of the DP algorithm. After that, they choose/design an application-dependent score function that helps decide whether each member of $X$ appeared in the input of the algorithm or not. The accuracy of this decision function implies a lower bound on the privacy of the algorithm.
Their main technical observation is that if $X$ is partitioned through a Poisson sampling procedure, then running the auditing algorithm for one run is as good as having $m$ independent runs. Their algorithm works both in the case where the final result of the algorithm is released and in the case where the intermediate steps are also released. In the end they decide whether the algorithm could be $(\varepsilon, \delta)$-DP or not by observing that the number of correct guesses has to be less than $r \cdot e^\varepsilon / (e^\varepsilon+1) + O(\sqrt{r})$ with high probability, where $r < m$ is the total number of guesses.
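This guess-count condition can be inverted numerically to turn an observed number of correct guesses into an empirical epsilon lower bound. A minimal sketch, using a plain binomial tail bound in place of the paper's tighter Theorem 3.1 analysis (so the resulting bounds are illustrative, not the paper's):

```python
import math

def binom_sf(k, n, p):
    # P[Binomial(n, p) >= k]
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def empirical_eps_lower_bound(correct, guesses, beta=0.05, eps_max=10.0, tol=1e-4):
    """Largest eps such that 'correct' successes out of 'guesses' is still
    implausible (tail probability < beta) for an attacker whose per-guess
    accuracy is at most e^eps / (e^eps + 1). Binary search exploits that
    the tail probability grows monotonically with eps."""
    lo, hi = 0.0, eps_max
    while hi - lo > tol:
        mid = (lo + hi) / 2
        p = math.exp(mid) / (math.exp(mid) + 1.0)
        if binom_sf(correct, guesses, p) < beta:
            lo = mid  # observation still too unlikely: true eps must exceed mid
        else:
            hi = mid
    return lo
```

For instance, 90 correct guesses out of 100 certifies a lower bound of roughly 1.6 at beta = 0.05, while chance-level guessing (50 out of 100) certifies essentially nothing.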
Strengths: The experiments for the gradient space attack suggest that their lower bounds are within a constant factor of the theoretical upper bounds, both in the case where the adversary has access to all of the model iterates and in the case where the adversary only has access to the final model.
The review of related work is very good and well written.
Their empirical lower bounds converge to analytical bounds in some cases if delta is sufficiently small and the number of examples goes to infinity.
Weaknesses: Their approach does not seem to extend well to the case where delta is not very small.
In the setting where the adversary only can audit the input space (as opposed to the gradient space), their results do not seem to be very strong. This is important because in practice the algorithm has to be private with respect to the input space and not the gradient space.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: In line 51 to 57, the authors mention that such results were previously unattainable in the setting where only one model could be trained. On a conceptual level, what do you think was the roadblock that previous work could not overcome and is overcome in this work?
It's not clear to me in the experiments whether the "meaningful" metric to consider is $n-m$ or $n / m$.
There's no other auditing baseline presented to compare to in Figure 6. How do your results compare to previous work?
Your empirical results suggest that the lower bounds converge to the analytical bounds in some cases if delta is sufficiently small and the number of examples goes to infinity. Is there an analytical explanation for this?
Something that's a bit confusing to me is that in Figure 10, the empirical lower bound on epsilon decays after some point. Can you explain why this is happening?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have discussed the limitations of their work adequately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and comments. We respond below.
**Not small delta:** Our approach works well for reasonable values of delta. E.g. delta=10^-3 in Figures 1 & 2. Handling larger delta is difficult even with multiple runs.
**Conceptual roadblock:** The fact that using multiple examples was previously treated as an unproven heuristic shows that people had the right intuition. The high-level key to analyzing this seems to be framing it in terms of generalization. The DP=>generalization literature provides the right toolkit to analyze multiple examples, but we still needed to do a lot of technical analysis to obtain tight results.
**Comparisons for Figure 6:** Please note that none of the previous works can provide guarantees for the single-run setting that we consider in this work; as a result, there are no previous works that we can directly compare to. To attempt a comparison to prior work, we tried auditing the Gaussian mechanism with a single example and multiple runs. To obtain results comparable to Figure 10 (i.e. eps>=2.8 when the sensitivity and standard deviation are equal), we require at least 100 runs using prior methods, in comparison to a single run using our method.
**Why n-m:** The main point we want to make with these experiments is that increasing the number of non-auditing examples makes the auditing harder. Therefore, it doesn’t make much difference whether we use m/n or n-m. Moreover, machine learning models have limited capacity, and the number of examples seems to be more important than the ratio. For example, as we see in Figure 7, significantly increasing the number of auditing examples leads to worse auditing performance.
**Convergence of lower bound to upper bound:** Figure 9 shows that our empirical lower bound does converge to the theoretical upper bound as the number of examples increases and accuracy is held constant. This simply demonstrates that our main theorem is tight. The gap between the bounds is due to statistical uncertainty, which decreases as we add more samples.
**Lower bound decays in Figure 10:** This simulation shows the effect of abstentions. I.e., abstaining on some examples allows us to focus on the clearest examples, which results in higher accuracy. In Figure 10 we increase the number of guesses by making fewer abstentions, which results in lower accuracy. (In contrast, the accuracy in Figure 9 remains constant as we add more guesses.)
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response and answering my questions.
After having read the comments of the other reviewers, the responses of the authors, and parts of the appendix, I have updated my review. Overall, I think this is a good and very well written paper that deserves to be in NeurIPS. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper studies the problem of auditing differentially private machine learning systems. They propose a procedure which does so in one training run -- the key is the ability to include/exclude multiple data items in the run, as well as a novel analysis via leveraging connections between DP and generalization. They conduct experiments auditing DP-SGD obtaining meaningful empirical privacy lower bounds.
Strengths: 1. The topic of the paper is important and timely. As the authors put, a privacy audit allows to assess the tightness of mathematical analysis and detect errors in analysis and implementation. As differentially private machine learning systems are being deployed increasingly, the research has potential to be impactful in these areas.
2. The authors make interesting contributions, with strong theoretical underpinnings which directly lead to improved results in practice. In my limited experience, such results are rather rare, and I greatly enjoyed this aspect of the paper. Moreover, the idea of membership inference with multiple inclusions/exclusions of data items has been used as a heuristic in the past, so this paper gives a mathematical justification for the heuristic.
3. The paper is very well-written -- it is insightful and to the point. The paper also contains an extended discussion (Section 6) on the limitation of this work (and related approaches).
Weaknesses: 1. In some parts of the paper, I found the writing to be too dense. This is especially true for Section 6. I encourage the authors to revise this part.
2. While the authors provide extensive experimental results testing various aspects of the procedure, I found the section to be poorly organized. I encourage the authors to provide an overview of what is to come in the beginning of the experiments section.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. Why the particular choice of score function for white-box attacks? Did the authors try other choices before settling on this?
2. What is the intuition behind allowing abstention?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: I greatly appreciate that the authors provided an extensive discussion of the limitations. However, as suggested above, the writing is too dense in these parts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time in reading and reviewing our submission!
**Writing:** We will edit the paper, and especially Section 6 for clarity.
**Why this score function:** This is the same score function as used in prior work, which showed that it achieves tight results. Our approach works with any score function, and devising new score functions is an interesting direction for future work.
**Why abstention:** As shown by Carlini et al. (2022), membership inference attacks frequently perform much better on some “hard” examples than on others. Abstaining allows us to focus on only those “hard” examples which are easiest to perform membership inference on. In addition, the hardness of the examples also has an element of chance due to the noise added for privacy; any given example may be harder or easier depending on how the noise happens to affect its score.
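As a rough illustration of the abstention idea described above (not the paper's implementation), one can guess membership only on the examples with the most extreme scores and abstain on the rest; the function name and the rank-based confidence threshold below are hypothetical choices.

```python
import numpy as np

def guesses_with_abstention(scores: np.ndarray, k: int) -> dict:
    """Guess membership only on the k examples with the most extreme scores
    (largest |score|) and abstain on the rest. A positive score is read as
    a guess of "member". Name and rank-based threshold are illustrative."""
    order = np.argsort(-np.abs(scores))  # most confident examples first
    return {int(i): bool(scores[i] > 0) for i in order[:k]}
```

With `scores = np.array([3.0, -0.1, -2.5, 0.2])` and `k = 2`, only indices 0 and 2 receive guesses (member and non-member, respectively); the two low-confidence examples are abstained on.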
---
Rebuttal Comment 1.1:
Title: Thanks!
Comment: I thank the authors for providing clarifications. I am still positive about the paper and will maintain my score! | null | null | null | null | null | null |
Iterative Reachability Estimation for Safe Reinforcement Learning | Accept (poster) | Summary: This paper proposed an iterative reachability estimation method for safe RL. The reachability is estimated by the probability of future trajectories entering unsafe state sets. Compared to previous reachability-based methods, the proposed method can handle stochastic dynamics and also improves performance under deterministic dynamics. The proposed algorithm also leverages more information from data, which could explain the performance improvement compared to existing methods. Theoretical convergence results are provided. Experimental results well support the claimed performance improvement and safety guarantees.
This paper removes a significant limitation of previous studies: the deterministic dynamics assumption. Removing this limitation must be done very carefully, and I think the authors did it very well. The authors also connected HJ reachability with CMDPs, which are two important formulations in safe RL. Therefore, I strongly suggest the paper be accepted.
Strengths: Originality:
Very good. As far as I know, no paper has considered stochastic reachability within the constrained RL setup. The authors also did a good job of connecting HJ reachability and CMDPs, two important formulations in constrained RL that used to be considered separately. This combination also removes a significant limitation of the previous study: the deterministic dynamics assumption.
Quality:
Excellent. The paper very clearly explains the relation to, and improvement over, the previous paper in both deterministic and stochastic settings, and makes a comprehensive comparison in the experimental section. The authors also summarize the novelty and advantages with intuitions that are easy to understand, i.e., previous methods only consider the maximum violation, which might lose information over the whole episode.
Clarity:
The presentation of this paper is good.
Significance:
The safety of stochastic systems is very important and challenging. There have been many theories and studies to formulate the problem, and reachability is undoubtedly one of the most powerful methods. Handling stochastic reachability should be very careful.
Weaknesses: I feel good about most of the paper, I only have comments on some minor problems:
1. The problem formulation, equation (4) should be emphasized better so that the reader will know this is the proposed problem formulation.
2. The notation system is a bit messy. The readers might get lost easily, especially those not familiar with the previous paper.
3. Algorithm 1 actually did not provide too much useful information. You should improve it to highlight the differences between your algorithm and the previous ones, like the REF update.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: I like most of the intuitions to explain the advantages of this paper. However, I have some questions:
1. (line 179–181, about the claimed limitations of previous work RCRL) _**These improvements in costs can be crucial in guiding the optimization toward a safer policy.** And optimizing with $V_h(s)$ can result in accumulating an unlimited number of violations smaller than the maximum violation._
I roughly got the intuition. I think you mean that some policies might be too far away from the safe policy. The optimization landscape looks like $V_h(s)$ should be non-decreasing in the first few steps but finally decrease to a low value. This is indeed interesting. Because from my intuition, a good policy update should always point to the direction that $V_h(s)$ decreases.
Did you observe this in the experiment? If so, could you show me this phenomenon with some experimental results? It would be super helpful for enhancing the contribution of this paper.
2. (Section 5.3, about the REF and multiplier.) The REF update actually relies on the distribution density function. It seems like you did not include this paper in the references,
> Qin, Zengyi, Yuxiao Chen, and Chuchu Fan. "Density constrained reinforcement learning." International Conference on Machine Learning. PMLR, 2021.
I think it is a good paper to further understand the relationship between density function and Lagrange multipliers in safe RL problems. You should discuss the relationship with this paper.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and the detailed suggestions.
>The problem formulation, equation (4) should be emphasized better... The notation system is a bit messy ... You should improve it to highlight the differences between your algorithm and the previous ones
Thank you, we will further emphasize the equation, clarify the notations, and include the full form of the gradient of the losses in the algorithm.
> The optimization landscape looks like $V_h$ should be non-decreasing in the first few steps but finally decrease to a low value. … Did you observe this in the experiment? If so, could you show me this phenomenon with some experimental results?
Yes, similar to what the reviewer suggests. In the quote, we are describing that since RCRL just computes maximum cost violation, any changes in cost later on in the trajectory that remain lower than the maximum cost will not affect the RCRL value. However, with cumulative sum of costs as used in RESPO, we can gain more learning signal since any change in costs along the trajectory will affect the cumulative sum.
This phenomenon explains, in the case when the agent is outside the feasible set, why RCRL tends to remain outside the feasible set and the RCRL optimization may not point to the direction that $V_h$ strictly decreases. However, our approach, RESPO, tends to (re)enter the feasible set.
For a visual on experimental results for comparing this property of cumulative costs compared to maximum reachable cost, we implemented a Double Integrator example and compared the trajectories between RESPO and RCRL in Appendix Section 4.4. It demonstrates how when the agent begins outside the feasible set, the _trajectory from RCRL remains outside the feasible set_ while the _trajectory from RESPO enters back into the feasible set_. For more information on this behavior of (re)entrance into the feasible set, we also refer the reviewer to Proposition 2 (line 213 of main paper) and its proof in Appendix Section 3.3.
> The REF update actually relies on the distribution density function. It seems like you did not include this paper in the references
Thank you, we will include it in the related work. Our REF estimates likelihood of future violations and is computed through reachability bellman formulation using the max operator. On the other hand, [Qin et al.]’s density metric computes state visitation density and is defined by the discounted sum of the likelihood of visiting a particular state. We will add more discussion on the relationship and differences between RESPO and [Qin et al.] in our related works.
[Qin et al.] Density constrained reinforcement learning. In ICML. PMLR, 2021.
---
Rebuttal Comment 1.1:
Comment: Thank you for the reply. I am now very confident that it is a good paper and the experimental results are convincing and comprehensive. | Summary: This paper presents Reachability Estimation for Safe Policy Optimization (RESPO), for safety-constrained RL in general stochastic settings. The authors extend the previous RCRL approach into stochastic settings and push the agent to (re)enter the feasible region. They formulate a safe RL problem with REF and further develop an adapted AC algorithm to solve it, with convergence analysis. They compare their approach against CMDP-based approaches with a soft constraint and RCRL with a hard constraint, showing the advantages.
Strengths: The paper is well-motivated and well-structured to follow. It studies a critical problem.
Based on RCRL, this paper does have some novelties in problem formulation and proposed approach. Technically, this paper is sound to me, although I only checked part of the math proof in the appendix.
The experiments are promising as they show mixed performance and safety violation improvements.
Weaknesses: 1. The writing could be further improved, especially the comparison with RCRL. The reviewer acknowledges that there is some explanation of the difference between the proposed approach and RCRL, still, it would be much better to add more and clarify it. For example, the reviewer is confused why RCRL cannot guarantee or optimize (re)entrance to the feasible set. Couldn't use the same proof of Proposition 2 to obtain the same (re)entrance proposition?
2. For the deterministic environments, my understanding is RCRL considers a harder constraint as it is per state constraint than the discounted additive constraints in your paper, why the harder constraint cannot optimize/guarantee (re)entrance to the feasible set?
3. How does the reentrance proven in the deterministic environment apply to stochastic systems?
4. There are some recent safe RL papers considering hard constraints. For example,
a. Wang, Y., Zhan, S. S., Jiao, R., Wang, Z., Jin, W., Yang, Z., ... & Zhu, Q. (2023). Enforcing hard constraints with soft barriers: Safe reinforcement learning in unknown stochastic environments. ICML 2023.
b. Xiong, N. (2023). Provably Safe Reinforcement Learning with Step-wise Violation Constraints. arXiv preprint arXiv:2302.06064.
The authors may consider discussing these recent references in the paper revision.
5. What do you mean by "almost surely" in the convergence analysis?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please see the weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: What are the limitations of RESPO?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and detailed suggestions.
> The writing could be further improved, especially the comparison with RCRL. The reviewer acknowledges that there is some explanation of the difference between the proposed approach and RCRL, still, it would be much better to add more and clarify it.
We will make further explanations along these lines. In lines 170-185, we listed several specific problems in RCRL that we aim to address, namely: 1) RCRL is limited to the deterministic setting and 2) RCRL cannot guarantee (re)entrance into the feasible set. In particular, RCRL is limited to deterministic settings because its reachability value function in the Bellman formulation does not directly apply to the stochastic setting.
>Why RCRL cannot guarantee or optimize (re)entrance to the feasible set. Couldn't use the same proof of Proposition 2 to obtain the same (re)entrance proposition?
RCRL cannot guarantee (re)entrance because its reachability value function permits having many (possibly infinite) violations less than or equal to the maximum violation. For a small example, consider the agent faced with two possible trajectories: trajectory A which has a violation sequence (100, 0, 100) and trajectory B which has a violation sequence (100, 0, 0). The reachability value function would assign value 100 to both trajectory A and B since 100 is the maximum violation. So a reachability value approach like RCRL would choose both paths with equal likelihood and not be guaranteed to enter the feasible set. However, by considering the cumulative violations, trajectory A will have a greater cost score than B. Therefore, an approach using cumulative cost like RESPO will optimally choose trajectory B and thereby enter the feasible set.
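The two-trajectory example above is easy to check numerically; the discount factor below is an arbitrary illustrative choice, not a setting from the paper.

```python
def reachability_value(costs):
    """RCRL-style value: the maximum violation along the trajectory."""
    return max(costs)

def cumulative_cost(costs, gamma=0.99):
    """RESPO-style value: the discounted sum of violations."""
    return sum(gamma ** t * c for t, c in enumerate(costs))

traj_a = [100, 0, 100]  # re-violates after leaving the unsafe region
traj_b = [100, 0, 0]    # (re)enters the feasible set and stays

# The max-violation value cannot distinguish the two trajectories...
assert reachability_value(traj_a) == reachability_value(traj_b) == 100
# ...while the cumulative cost strictly prefers trajectory B.
assert cumulative_cost(traj_a) > cumulative_cost(traj_b)
```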
We proposed to utilize reachability to instead measure likelihood of violation and to use (discounted) sum of costs along a trajectory to minimize cumulative violations. As Proposition 2 indicates, the (discounted) sum of costs can guarantee (re)entrance into the feasible set whenever possible by not allowing an uncountable number of violations smaller than (or equal to) the maximum violation in the trajectory. For details on the proof, please refer to Appendix section 3.3. For experimental support for this proposition, please refer to the Double Integrator example in Appendix section 4.4 which considers the behavior under RCRL versus RESPO when the agent starts inside the safe set but outside the feasible set.
>For the deterministic environments, my understanding is RCRL considers a harder constraint as it is per state constraint than the discounted additive constraints in your paper, why the harder constraint cannot optimize/guarantee (re)entrance to the feasible set?
A hard, state-wise constraint approach doesn’t necessarily guarantee entrance into the feasible set when the agent begins _outside the feasible set_. RCRL is unable to (re)enter the feasible set because of the formulation of its reachability value function. Specifically, because it only minimizes the maximum violation along a trajectory, it does not consider the possibility of many (potentially infinite) violations smaller than or equal to the maximum that prevent it from entering the feasible set.
> How does the reentrance proved from the deterministic environment applied to stochastic systems?
The principle of reentrance using cumulative costs can be extended to stochastic systems. The basic idea in Proposition 2 is that if there exists a way to (re)enter the feasible set, that path would have a finite cumulative cost, while all trajectories that do not enter the feasible set will have infinite cost (for gamma close to 1 and very large horizons); therefore, the optimization will produce a policy with minimal (i.e. finite) cost, which reenters the feasible set. This principle also carries over to stochastic systems (even if the chance of ending up in a trajectory with infinite cost is small but nonzero, the expected cumulative cost will still be infinite), which justifies the usage of a value function based on cumulative cost rather than maximum cost along a trajectory.
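A small numerical sketch of this argument (with illustrative horizons and discount factors, not the paper's settings): the discounted cost of a re-entering trajectory stays bounded for every gamma, while the cost of a trajectory that never re-enters approaches $1/(1-\gamma)$ and diverges as gamma approaches 1.

```python
def discounted_cost(cost_at, gamma, horizon):
    """Discounted cumulative cost of a trajectory with per-step cost cost_at(t)."""
    return sum(gamma ** t * cost_at(t) for t in range(horizon))

def reenter(t):
    """A trajectory that re-enters the feasible set: violations stop after step 10."""
    return 1.0 if t < 10 else 0.0

def stay_out(t):
    """A trajectory that never re-enters: a unit violation at every step."""
    return 1.0

for gamma in (0.9, 0.99, 0.999):
    # The re-entering cost is bounded by 10 for any gamma, while the
    # non-re-entering cost approaches 1 / (1 - gamma), diverging as gamma -> 1.
    assert discounted_cost(reenter, gamma, 10_000) < 10.0
    assert discounted_cost(stay_out, gamma, 10_000) > discounted_cost(reenter, gamma, 10_000)
```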
> There are some recent safe RL papers considering hard constraints. For example, [Wang et al.] and [Xiong et al.]
Thank you, we will add these papers in our related works section. Both these papers are model-based while our proposed algorithm falls in the category of model-free safe RL.
> what do you mean by "almost surely" in the convergence analysis?
By almost surely, we are using the probability theory definition that the likelihood of the event (i.e. convergence to local optimum) is 1. This definition has been used in past works including [Chow et al.] and [Borkar].
> What are the limitations of RESPO?
We refer the reviewer to main paper lines 358-360. The main limitation is that we do not guarantee minimal violations _during_ training. We leave open potential extensions that can help in applications like safety in single-lifetime reinforcement learning.
[Chow et al.] Risk-constrained reinforcement learning with percentile risk criteria. JMLR, 2017.
[Borkar] Stochastic approximation: a dynamical systems viewpoint, volume 48. Springer, 2009.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed rebuttals. The responses have fully addressed my questions and I am willing to increase my score to a 7. | Summary: Previous approaches to safe reinforcement learning used the constrained MDP formulation where there is a constraint imposed on the cumulative sum of costs to minimise violations. This framework is not applicable very easily where there is a need for hard constraint satisfaction. The previous approach (RCRL) which leverages reachability analysis to strictly satisfy hard constraints is limited to deterministic MDP and is not suited to bringing the state back to the feasible set when already outside the feasible set. In this work, the authors minimize the expected chance and frequency of violations under stochastic transition dynamics thus resolving the two problems with the previous work. The order in which to update the Q-networks, the policy networks, the Lagrangian dual factors and the reachability estimation function is studied using an empirical approach and a theoretical convergence guarantee to a local optimum is provided for this alternating optimization. Empirical comparisons are made extensively to a wide spectrum of existing approaches.
Strengths: Advantages of paper in relation to deterministic dynamics assumption of RCRL is clear and has merit. Empirical comparisons to previous work are quite extensive with additional explanations in the appendix. Convergence to local optimum is presented to establish rigor and soundness of the method.
Weaknesses: 1) In section 6, it was not very clear what to see in the figures and it looked like RESPO was achieving a different point in the trade-off curve compared to the other methods. More information was available in the appendix. More discussion on how to compare the different methods and why one method performs better in a specific metric can be written out in the main section. The motivation of the paper provides the twin advantages of getting back into the feasible set and accounting for stochasticity. The first advantage is seen in the double integrator example in the appendix. Do any of the previous methods suffer due to deterministic assumptions and is the actual MDP stochastic?
2) Since there are many networks and parameters updated at the same time, the robustness and reproducibility of the training process for this method and similar previous methods seem suspect. The authors have discussed this aspect in the ablation studies and the best possible convergence is obtained in the way the authors are doing the training. This insight, though a mild weakness, could benefit the community as we are inferring new insights about alternating optimization between multiple networks tied to each other.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) While doing the comparison to CBF methods, it is not clear what is being compared against. CBFs/energy functions are usually handcrafted, but there is CBVF (cited by the paper) and other related work that tries to construct the optimal CBF with the maximum safe set (assuming a known model). With the optimal CBF, the behaviour of CBF methods is not very conservative. If stochasticity is the source of trouble for the CBF methods, some extensions such as robust CBFs and learned CBFs are available. Here, the authors can provide more details about which exact CBF method they are using.
2) In line 155, should $V_c$ be in the expression for optimal policy? $V_c$ is previously defined differently. “We define optimal REF based on an optimally safe policy π∗” is then misleading. The policy is optimizing a different loss in line 227 and it is not clear whether we actually get the optimal REF from this process.
3) Assumption A3 (Lipschitz gradients) is a rather strong assumption. Are there previous cases where RL value functions are assumed to have Lipschitz gradients?
4) In figure 3, why are RCRL and FAC violating the hard constraint if they are designed to respect hard constraints?
5) The variance of the red RESPO learning curves seems to be high in certain figures, indicating training instabilities.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and detailed suggestions.
> it looked like RESPO was achieving a different point in the trade-off curve compared to the other methods.
Our approach consistently achieves higher rewards and lower costs compared to the other safety baselines. In particular, RESPO almost always achieves the highest or second-highest reward performance among the safety baselines while maintaining low cumulative costs. When RESPO achieves the second highest, the algorithm with the highest reward (RCRL or PPOLag) always incurs several times more cost, generally beyond the acceptable threshold for that environment. The exception is the Reacher environment, but even there RESPO has performance similar to the other reward-successful algorithms with significantly lower cost. We will add more discussion on this in the main paper.
> Do any of the previous methods suffer due to deterministic assumptions and is the actual MDP stochastic?
We ensured that the environments running on the PyBullet engine, namely DroneCircle and BallRun, were stochastic by adding some noise. We will explain more details on this in the paper. Note that the learned policies were stochastic (i.e. $\pi(a|s)$). We can see the poor performance (i.e., either low rewards and low costs, or high rewards and high costs) of algorithms like RCRL that assume deterministic environments.
> This insight, though a mild weakness, could benefit the community as we are inferring new insights about alternating optimization between multiple networks tied to each other.
As the reviewer noted earlier, we demonstrated via 1) ablation studies in figure 5 on varying learning rates as well as 2) the proof in Appendix section 3.4 that having a particular order of learning rate schedules for the networks satisfying Assumption A1 (main paper line 265) guarantees convergence to the local optimum of our RESPO optimization.
> If stochasticity is the source of trouble for the CBF methods, some extensions such as robust CBF and learned CBF are available.
As per the reviewer's suggestion, we have run additional experiments using the model-free learned CBF certificate method from [Yang et al.]; the results can be seen in the PDF attached to the global response. Compared to our approach and the other baselines, the learned CBF produces conservative behavior (low reward and low cost), likely because it is difficult to obtain the optimal CBF.
> the authors can provide more details about which exact CBF method they are using.
More details about the CBF method can be found in the Appendix lines 290-292. Particularly, the constraint is $(c’ - c)/dt + \nu \cdot c \leq 0$ where $c$ and $c’$ are consecutive cost values in a trajectory and $dt$ is the time step. This approach was also used as the CBF baseline in the RCRL paper.
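For concreteness, the discrete-time CBF condition above can be checked along a trajectory in a few lines; the sketch below is ours (the function name and sample cost values are illustrative, not from the paper or the RCRL code).

```python
def cbf_satisfied(costs, dt, nu):
    """Check the discrete-time CBF constraint (c' - c)/dt + nu * c <= 0
    for every pair of consecutive cost values in a trajectory."""
    return all(
        (c_next - c) / dt + nu * c <= 0
        for c, c_next in zip(costs, costs[1:])
    )

# A geometrically decaying cost sequence satisfies the constraint for
# suitable nu, while an increasing one does not.
print(cbf_satisfied([1.0, 0.8, 0.64, 0.512], dt=0.1, nu=1.0))  # True
print(cbf_satisfied([1.0, 1.2], dt=0.1, nu=1.0))               # False
```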
>In line 155, should $V\_c$ be in the expression for optimal policy? ... The policy is optimizing a different loss in line 227 and it is not clear whether we actually get the optimal REF from this process.
Yes, $V_c$ should be in the expression. The RESPO optimization we proposed in line 227 optimizes reward (i.e., $V^{\pi}$) while maintaining cost $V^{\pi}\_{c}=0$ if the state is in the feasible set, and minimizes the cost $V^{\pi}\_{c}$ if the state is in the infeasible set. Notice that this produces the same safety behavior as $\pi^* = \arg \min\_{\pi} V^{\pi}\_c (s)$: if $V^{\pi}\_c(s)=0$, then the state is in the feasible set and the RESPO optimization maximizes reward $V^{\pi}$ while maintaining the cost constraint of 0 violations. In the other case, $\min\_{\pi} V^{\pi}\_c(s)$ and the RESPO behavior both minimize the cumulative cost. Since in both cases RESPO ensures $V^{\pi}\_c(s)$ is minimal, the optimal policy of RESPO is also an optimal policy of $\min\_{\pi} V^{\pi}\_c (s)$. Hence, we can define the optimal REF from RESPO “based on __an__ optimally safe policy $\pi^∗$.”
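Schematically (in our own notation, not a quotation of the paper's formulation in line 227), this case split can be written as:

```latex
% Illustrative case split (our notation); \mathcal{S}_f denotes the feasible set.
\pi^* \in
\begin{cases}
\arg\max_{\pi} \; V^{\pi}(s) \ \ \text{s.t.}\ \ V^{\pi}_c(s) = 0, & s \in \mathcal{S}_f,\\[2pt]
\arg\min_{\pi} \; V^{\pi}_c(s), & s \notin \mathcal{S}_f.
\end{cases}
```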
> Are there previous cases where RL value functions are assumed to have Lipschitz gradients?
This assumption has been made in the past in several papers including [Chow et al] and [Yu et al]. Note we only use linear, tanh, softplus, and sigmoid activation functions which have Lipschitz continuous gradients.
> In figure 3, why are RCRL and FAC violating the hard constraint if they are designed to respect hard constraints?
Approaches like FAC and RCRL are unable to converge to a suitable policy that manages the relative importance of the constraints because they treat the soft constraint and both hard constraints with the same priority. Our approach, however, can prioritize the constraints within its optimization formulation (more details on how we do it are in appendix lines 423-435). In particular, RESPO never violates the primary constraint of wall avoidance and only rarely violates the secondary hard constraint on the closeness of the drones. Please refer to Section 4.8 of the appendix for more details on the optimization formulation for RCRL and FAC (particularly line 436).
>The variance of the red RESPO learning curves seem to be high in certain figures indicating training instabilities.
RESPO actually exhibits smaller instabilities than the baseline algorithms near the end of training. While in some environments RESPO shows instabilities early in training, our experiments show that it generally converges stably later in training. The only environment where the variance is relatively high at the end of training is DroneCircle, but even there, the baselines with decent reward performance, like PPOLag, P3O, FAC, and CRPO, have even higher variance than RESPO.
[Yang et al.] Model-free safe reinforcement learning through Neural Barrier Certificate. IEEE Robotics and Automation Letters, 2023.
[Chow et al.] Risk-constrained reinforcement learning with percentile risk criteria. JMLR, 2017.
[Yu et al.] Reachability constrained reinforcement learning. In ICML. PMLR, 2022.
---
Rebuttal Comment 1.1:
Title: Good job
Comment: I believe this is a strong paper. The authors have addressed all questions and provided clarifications. I have read the other reviews and comments. Overall, I am happy to increase my score to 7 at this time. I will stay tuned to see whether there are any further questions from other reviewers. | Summary: The paper proposes a new algorithm that may handle hard and soft constraints, in which the policy optimization and Hamilton-Jacobi reachability are leveraged to ensure safety. Moreover, experiment results on safety gym, safety PyBullet, and safety MuJoCo also show the good performance of their algorithm.
Strengths: 1. The convergence analysis sounds good.
2. Comprehensive experiments are provided.
3. Hard constraints and soft constraints are investigated.
Weaknesses: 1. The paper's writing quality needs significant improvement; I am confused by the notation, e.g., V_h and V_c.
2. The experimental results regarding some baselines appear incorrect, especially for CRPO: in the CRPO paper, the algorithm shows better performance than PPO-Lagrangian and CPO.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Could you analyze the difference between RESPO and CRPO regarding the reward update and cost update?
2. If the agent does not find the safe action, will it be stuck at a point?
3. How do you define the reachable set when considering reward performance?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: 1. The balance between reward and cost is not addressed well; in the experiments, as shown in Figure 3, although RESPO can ensure safety, its trajectory is longer than the other baselines' and also not smooth.
2. Some related papers are not mentioned in the study, e.g., [Kochdumper, N., et al., 2023], [Gu, S., et al., 2022] and [Selim, M., et al., 2022].
[Kochdumper, N., et al., 2023] Kochdumper, N., Krasowski, H., Wang, X., Bak, S., & Althoff, M. (2023). Provably safe reinforcement learning via action projection using reachability analysis and polynomial zonotopes. IEEE Open Journal of Control Systems, 2, 79-92.
[Gu, S., et al., 2022] Gu, S., Chen, G., Zhang, L., Hou, J., Hu, Y., & Knoll, A. (2022). Constrained reinforcement learning for vehicle motion planning with topological reachability analysis. Robotics, 11(4), 81.
[Selim, M., et al., 2022] Selim, M., Alanwar, A., Kousik, S., Gao, G., Pavone, M., & Johansson, K. H. (2022). Safe reinforcement learning using black-box reachability analysis. IEEE Robotics and Automation Letters, 7(4), 10665-10672.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and detailed suggestions.
>Paper writing quality needs to be improved a lot, I am confused about the paper notation, e.g., $V\_h$ and $V\_c$.
We have provided the definitions of the notations in lines 108-165 of the main paper, as well as a summary of the notation in Table 1 of the appendix. We will further revise the notation to make it clearer. $V_h$ is the reachability value function defined in Definition 2 (line 145 of the main paper), which describes the maximum reachable violation. $V_c$ is the cumulative discounted sum of costs defined in the CMDP section (line 125 of the main paper).
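For the reader's convenience, these two quantities take the following schematic forms under the standard reachability-constrained RL convention (cf. [Yu et al., 2022]); the exact definitions are in the paper's Definition 2 and CMDP section, so treat this as our paraphrase rather than a quotation:

```latex
% Schematic forms (our paraphrase); c is the per-step cost, h the violation signal.
V^{\pi}_{c}(s) = \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, c(s_t) \,\middle|\, s_0 = s\right],
\qquad
V^{\pi}_{h}(s) = \max_{t \ge 0}\, h(s_t), \quad s_t \sim \pi,\ s_0 = s.
```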
>The experimental results are not correct regarding some baselines, especially for CRPO, in CRPO paper, the algorithm presents better performance than PPO-Lagrangian and CPO.
With due respect, we do not believe the reviewer's claim is accurate. The experiments in the CRPO paper [Xu et al.] do not compare with either PPO-Lagrangian or CPO; they compare with a different algorithm called PDO and with unconstrained TRPO (page 8 of [Xu et al.]). Furthermore, CRPO evaluates on the Cartpole and Acrobot environments, which are different benchmarks from those presented in our paper. Our implementation of the CRPO baseline is based on the open-source Python library omnisafe. Note that CRPO still inherits the limitations of the CMDP framework: it cannot operate in a state-wise hard-constraint setting, and the cost threshold requires fine-tuning and/or prior knowledge of the environment.
>Could you analyze the difference between RESPO and CRPO regarding the reward update and cost update?
The optimization procedures in RESPO and CRPO use the reward and cost critics differently, though the algorithms' critic updates are similar. In particular, our contribution is a novel reachability-based optimization framework which maintains a reward critic, a cost critic, and a reachability estimation function (REF). Our work addresses maintaining hard constraints as well as possible in a stochastic environment and uses a primal-dual constrained optimization method. CRPO's contribution, on the other hand, is situated within a different optimization framework, namely Constrained Markov Decision Processes (which is not suitable for hard constraints; see the CMDP section 3.2 in the main paper), and proposes a primal-based policy optimization technique.
>If the agent does not find the safe action, will it be stuck at a point?
No, unless staying stationary is an available safe action. Based on our RESPO optimization, if there is no action so that the agent is currently or in the future safe, that means the current state is outside the feasible set. So, our optimization will be producing a policy that chooses trajectories minimizing the cumulative discounted sum of costs starting from that state. For a visual of an experiment showing how our algorithm might handle this, we refer the reviewer to the Double Integrator example in Appendix Section 4.4 in which the agent starts outside the feasible set.
> How do you define the reachable set when considering reward performance?
In definition 5 (in Section 4.1 of the main paper), we define the feasible set as the set of states from which no violation is reached under a given policy (the reachable violation set will be the complement of the feasible set). RESPO will optimize reward performance under the constraint that the cumulative cost is 0 (i.e. the agent remains in the feasible set) if the agent is in the feasible set. Furthermore it will minimize cumulative cost if the agent is not in the feasible set.
>The balance between reward and cost is not addressed well, in the experiments, as shown in Figure 3, although RESPO can ensure safety, the trajectory is longer than other baselines, and also not smooth.
In lines 318-322 of the main paper, we describe how Figure 3 demonstrates that RESPO has a desirable balance between reward and cost: it ensures the satisfaction of the various constraints and reaches closer to the goal locations than the other approaches do, thereby maintaining high rewards.
Note that RESPO's trajectory is actually the shortest ($\sim 75$ steps) compared to the other baselines ($\sim 85$, $\sim 80$, and $\sim 120$ steps), as indicated in the "Trajectory Step Number" bar next to each plot in Figure 3.
Additionally, the RESPO trajectory is quite smooth except at a particular point for the upper drone agent, which, as we explain in lines 320-322 of the main paper, is needed to satisfy the hard constraints. The upper drone makes room so that only one drone at a time is in the tunnel, which is the intended behavior. The other approaches do not produce this intended behavior of one drone at a time through the tunnel.
>Some related papers are not mentioned in the study, e.g., [Kochdumper, N., et al., 2023], [Gu, S., et al., 2022] and [Selim, M., et al., 2022].
Thank you, we will add these to our related works discussion. All three papers rely on having access to, or reconstructing, the model of the system dynamics to make predictions about future environment rewards and/or responses; they are therefore model-based, while our approach falls within the category of model-free safe RL.
[Xu et al.] Crpo: A new approach for safe reinforcement learning with convergence guarantee. In ICML. PMLR, 2021. | Rebuttal 1:
Rebuttal: We thank the reviewers for their detailed suggestions and feedback. Our novel algorithmic contributions in this paper include 1) providing a reachability-based hard constraint satisfaction approach for stochastic and deterministic settings and 2) the ability of (re)entrance into the feasible set within our reachability-based framework. We have evaluated our proposed approach on several benchmarks and compared with various baselines. We also demonstrated our method can handle and prioritize multiple hard and soft constraints. We have provided a proof that our algorithm converges to the local optimum of our proposed optimization.
We have run additional experiments for a model-free learned CBF baseline and compared with our proposed approach and other baselines. We include figures of the results in the attached pdf.
Pdf: /pdf/ab03a801294ed5a10cb68b3f591198e7b5fc0e0f.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
HubRouter: Learning Global Routing via Hub Generation and Pin-hub Connection | Accept (poster) | Summary: This paper proposes a two-phase learning framework called HubRouter for global routing in chip design. Different from previous works that directly generate routes from chip images, which potentially cause inconnectivity, this paper proposes to generate hubs representing tiles in the first phase and then construct RMST with hubs by an actor-critic model. The experimental results show the effectiveness in terms of higher correctness, and shorter WL compared with SOTA model.
Strengths:
1. The paper is the first to propose hubs for the global routing task in chip design, transforming the pin-pin problem into a hub-pin problem, which inherently avoids the unconnectivity problem that arises when the routing design process is treated as image generation.
2. When dealing with large chips, HubRouter is the best performer among SOTAs regarding both correctness and wire length.
3. The proposed model can have good scalability with GAN or VAE as their generative model for the hub generation phase while maintaining the property of making wire length short and overflow less.
4. In the hub-generation phase, besides hubs, it also generates routes and masks to avoid noises brought by the generative model itself since the noises can greatly impact the results. Especially, the stripe mask can be greatly helpful for complicated cases.
Weaknesses: 1. The overflow still exists for those generated global routing even though the proposed model reduces overflow better than PRNet.
2. Even though HubRouter performs well at replicating known routes, the paper does not discuss the quality of the generated routes regarding congestion, and the model possibly cannot generate novel routes since it has limited knowledge of routing design.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Q1. It is unclear how the stripe mask is generated for chips.
Q2. Why not define pixels as hubs that only connect two neighbors on the same row or columns, i.e., $r_{(i-1)j}+r_{(i+1)j}+r_{i(j-1)}+r_{i(j+1)}=2$. Without points between hubs defined in the paper, how does the second phase generate the routes in the right direction?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors discuss the limitations of the lack of training data about hub generation, and the model is not an end-to-end model. Still, it also should discuss how the ground truth comes and the potential impact of adopting such methods when generating it.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and valuable feedback, as well as your positive comments and interest in our paper. According to your constructive comments, we make some replies to the questions.
> **W1: The overflow still exists for the generated global routing.**
Yes, overflow still exists but HubRouter achieves better overflow performance compared with the SOTA generative global routing method (PRNet). Specifically, as shown in Figure 5, HubRouter (GAN) can achieve an average overflow reduction of about 40% compared to PRNet on the ISPD-98 benchmarks.
> **W2: It does not discuss the quality of the generated routes regarding congestion, and it possibly cannot generate novel routes since it has limited knowledge of routing design.**
The overflow metric reflects the congestion of the generated routes, and HubRouter surpasses the SOTA generative global routing method (PRNet) on overflow. Since the routes are learned from the training datasets, HubRouter cannot generate novel routes under new design rules, but it can indirectly learn latent design knowledge from the training datasets.
> **Q1: It is unclear how the stripe mask is generated for chips.**
We define the stripe mask in lines 171-172 and one can generate the stripe mask according to the definition when given a route. The corresponding sketch is shown in Figure 4(b), where gray stripes represent the stripe mask.
> **Q2: Why not define pixels as hubs that only connect two neighbors on the same row or columns? Without points between hubs defined in the paper, how does the second phase generate the routes in the right direction?**
HubRouter's aim in the first phase is not to generate all pixels in a route but only the key points, so that in the second phase HubRouter can connect these key points with the pins. If pixels that connect exactly two neighbors on the same row or column were also regarded as hubs, then the hubs would coincide with all pixels of the route, which would defeat our original purpose.
In the second phase, any two points can be connected from the RSMT-construction perspective; moreover, with the guidance of correctly generated key points (hubs), the routes are generated as expected.
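To make this concrete, the exclusion of straight-through pixels can be sketched as follows. This is our own simplified reading of the paper's Definition 1 (the function name and example grid are ours): a route pixel is kept as a hub unless it connects exactly two neighbors lying on the same row or the same column.

```python
import numpy as np

def find_hubs(route):
    """Mark route pixels as hubs unless they are straight-through points,
    i.e. pixels whose only two route neighbors lie on the same row or
    the same column (our illustrative sketch of Definition 1)."""
    m, n = route.shape
    hubs = np.zeros_like(route)
    for i in range(m):
        for j in range(n):
            if not route[i, j]:
                continue
            up    = route[i - 1, j] if i > 0     else 0
            down  = route[i + 1, j] if i < m - 1 else 0
            left  = route[i, j - 1] if j > 0     else 0
            right = route[i, j + 1] if j < n - 1 else 0
            straight = (up and down and not (left or right)) or \
                       (left and right and not (up or down))
            hubs[i, j] = 0 if straight else 1
    return hubs

# An L-shaped route: the corner and the two endpoints are hubs,
# the straight-through pixels are not.
route = np.array([[1, 1, 1],
                  [0, 0, 1],
                  [0, 0, 1]])
print(find_hubs(route))
```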
> **Limitations: It also should discuss how the ground truth comes and the potential impact of adopting such methods when generating it.**
Due to the limited length of the main text, we discussed how the ground truth is obtained in Appendix D.1.
The potential negative impacts include: 1) the incorrect generation might lead to poor routing results; 2) Generative models require large amounts of computing resources, which might cause a waste of resources. We will incorporate these impacts in the revised version.
Please let us know if you have further questions.
---
Rebuttal Comment 1.1:
Title: Thanks for authors' response
Comment: Thanks for the clarifications. Most of my concerns are addressed. After reading the additional materials and discussions, I think this paper shows good potential to assist chip design in terms of WL, correctness rate, and efficiency. However, as a generative model, it mainly generates routes within the training set knowledge. I would like to keep my current score.
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: We appreciate your positive feedback and your support for our work. We are glad that our clarifications and additional materials have addressed most of your concerns. Thank you for taking the time to review our work. | Summary: The paper focuses on the generative global routing tasks and mainly ensures the connectivity of generated routes via a two-stage framework. In the first phase, the approach involves a typical generative task, which exploits multi-task learning to promote the generation quality and utilizes a trick called stripe mask to decrease some redundant noise points. In the second phase, the work is formulated as an RSMT construction problem and addresses this problem by REST. The authors show that with correctly generated hubs, the RSMT construction can be solved with less time.
Strengths: + The structure that generates hubs first and then connects them with pins is novel and interesting, and is reasonable to guarantee the connectivity of routes in the global routing.
+ The motivation is clear, and the authors show in Table 2 that the so-called `unconnectivity’ caused by existing generative global routing algorithms (PRNet) is severe, but I would have preferred the authors to also refer to it in the introduction part to strengthen the motivation.
+ The proposed approach performs better than other generative global routing algorithms on several metrics; in particular, the connectedness rate is 100% and the running time of HubRouter (GAN) is much less than PRNet (GAN)'s.
+ The authors also give some applications other than global routing to show the generality of the proposed approach.
Weaknesses: - The approach is clearly divided into two different phases, but the running time shown in the experiments seems to be combined. The authors could report both generation time and connection time to show the overhead of each phase.
- Some possible typos: The $r_{(n+1)j}$ should be $r_{(m+1)j}$ in Definition 1.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Whether the nets in ISPD98 (Table 3) are routed sequentially? If so, what’s the performance of HubRouter when nets are concurrently routed?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and valuable feedback, as well as your positive comments and interest in our paper. According to your constructive comments, we make some replies to the questions.
> **W1: Displaying time overhead in either phase.**
The time overhead of the two phases is shown in Table 2 in the rebuttal PDF. As can be seen, the connection phase has comparable time costs for the three generative models, which is reasonable. The generation phase, however, dominates the overall time consumption.
> **W2: Some possible typos.**
Thanks again for your meticulous review and we will correct typos in the revised version.
> **Q1: Whether the nets in ISPD98 (Table 3) are routed sequentially? If so, what’s the performance of HubRouter when nets are concurrently routed?**
Yes, the nets are routed sequentially. Routing this way maintains the WL and overflow performance but is slower than concurrent routing. The results of concurrent routing are shown in Table 3 and Figure 1 in the rebuttal PDF; the batch size for concurrent routing is set uniformly to 20. As can be seen, the result of concurrent routing is slightly worse than that of sequential routing, but it wins in speed.
Please let us know if you have further questions.
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: Thanks for your rebuttal. I have no more questions and keep my positive score.
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: Thank you for taking the time to read our rebuttal and maintaining your positive score. We greatly appreciate your support of our work. | Summary: This paper investigates the issue of global routing in VLSI systems and introduces HubRouter, a method that initially generates hubs and subsequently connects them to pins. In the first phase, the authors explored different generative models. In the second phase, the authors employs an actor-critic model to generate a final routing.
Strengths: 1. The paper is well-written and easy to follow.
2. The authors conducted many experiments under different baselines.
Weaknesses: 1. The two phases are independent, meaning that the feedback from the second phase does not influence the hub generation, potentially leading to suboptimal results.
2. The authors tried three generative models (VAE, DPM and GAN) in the paper. But the paper lacks clarity on which model should be used in specific situations.
3. The performance improvement achieved is marginal.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. In the first phase, the authors introduced three objectives: the hub, route, and mask, with the latter two serving as auxiliary designs. Given that the goal of the route is similar to that of the second phase, it would be worth considering why the authors did not design the two phases as a loop, utilizing the second evaluation result as feedback for the first generation. Furthermore, it would be helpful to understand the differences between the results of the route and the second phase.
Besides, the design of mask is rather hand-craft. Why the threshold is 1/2 in each row and each colum? Why the authors only evaluate the row and column, not the density of a specific rectangle area?
2. In the experiments, it seems that the three generative models (VAE, DPM, and GAN) have their own advantages and disadvantages. For instance, GAN achieves the best WLs on six datasets, DPM has the best WL on the remaining two datasets, and VAE is the fastest. How should one select the model when presented with a new dataset?
3. Considering the absolute value of WL, which is the primary evaluation metric, could the authors explain the real-world benefits of improving WL by 1%?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Overall, I believe that the integration of the generative model and reinforcement learning algorithm could be more elegant, potentially leading to further performance improvements.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and valuable feedback. According to your constructive comments, we make some replies to the questions.
> **W1: The two phases are independent, potentially leading to suboptimal result./Q1: Why the authors did not design the two phases as a loop?**
We do not design an end-to-end model mainly because: 1) some calculations in the two phases are not continuous, which makes it non-trivial to incorporate gradient computation in the backward propagation; 2) the performance of the end-to-end SOTA method (PRNet) is even worse than HubRouter's, probably due to the complexity of the problem.
We detailedly highlight the reasons in the first question in the global response. Please refer to it for a more complete explanation.
> **W2: The paper lacks clarity on which model should be used in specific situations./Q2: How should one select the model when presented with a new dataset?**
Thanks for your suggestions. We have tried three generative models in our paper's experiments: VAE, GAN, and DPM.
Here are our added conclusive remarks:
We empirically show that they have different behaviors.
When given a new dataset, we argue that the choice of generative model should be based not on the characteristics of the dataset but on the industrial needs. Specifically, if one needs high quality, GAN is the best; if speed is highly required, VAE is the best; if one has abundant resources, DPM can be adopted.
> **W3: The performance improvement achieved is marginal./Q3: Could the authors explain the real-world benefits of improving WL by 1%?**
We respectfully disagree that the performance improvement is marginal.
First, the 1% improvement in WL is significant. The absolute value of WL is indeed a common metric in global routing since it directly shows the performance on each case, and per your comments the improvement in the absolute value of WL may seem marginal. However, this task has a theoretical lower bound [1], so it is unfair to judge the improvement by its absolute value. To show this, we compare the relative error in Table 1 in the rebuttal PDF, where the relative error is computed as $(WL-b)/b$ and $b$ denotes the theoretical lower bound. As this table shows, the WL improvement of HubRouter is notable compared with the SOTA generative global routing method (PRNet).
Second, ensuring routing connectivity is crucial in global routing. As can be seen in Table 2, PRNet suffers from severe unconnectivity: on the complex case ‘Route-large’ in particular, PRNet maintains only a 4% correctness rate, meaning 96% of samples require the time-consuming post-process. In contrast, HubRouter maintains full connectivity (improving from 4% to 100% on ‘Route-large’).
Third, the promotion of inference time is significant and this is a metric vital to industrial applications. As shown in Table 3, HubRouter is on average 12x faster than SOTA generative global routing method (PRNet) on ISPD-98 cases because the time-consuming post-processing to achieve connectivity is not required for HubRouter.
In conclusion, HubRouter's contribution is by no means just a 1% WL improvement. We argue that the significant improvements in WL, correctness rate, and inference time can substantially boost the efficiency and quality of global routing, which is vital to chip design.
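The relative-error comparison mentioned in the first point above can be sketched as follows; the lower bound and WL values here are purely illustrative, not the paper's actual numbers.

```python
def relative_error(wl, lower_bound):
    """Relative error (WL - b) / b against the theoretical lower bound b."""
    return (wl - lower_bound) / lower_bound

# Illustrative: a 1% absolute WL gap translates into a much larger gap in
# relative error when both methods sit close to the lower bound.
b = 100.0
print(relative_error(103.0, b))  # 0.03
print(relative_error(102.0, b))  # 0.02 -> a ~33% reduction in relative error
```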
> **Q1: It would be helpful to understand the differences between the results of the route and the second phase.**
The first phase can generate the route, but that route suffers from unconnectivity, like PRNet's. So the routes generated in the first phase are only used auxiliarily to guide hub generation via multi-task learning. In the second phase, the connection process ensures the route is connected.
> **Q1: the design of mask is rather hand-craft. Why the threshold is 1/2 in each row and each column?**
We use the majority voting algorithm [2] to determine whether a row/column is a stripe mask based on the pixel classification. This algorithm is commonly applied in machine learning, such as ensemble learning [3]. The criterion is that if more than half of the pixels are classified as 1, the row/column is considered a stripe mask. Although this is a theoretically sound threshold, we acknowledge that it may not be optimal in practice. However, we observe that the pixel ratio of any row/column that is a stripe mask is higher than 0.9, and lower than 0.1 for those that are not. This suggests that the model has learned the feature of the stripe mask well. Therefore, changing the threshold to 0.4 or 0.6 would not affect the results.
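As an illustration, the voting rule described above can be sketched as below; the function and array names are ours and simplified (in the paper, the voting is applied to the generative model's per-pixel classifications).

```python
import numpy as np

def stripe_mask(pixel_probs, threshold=0.5):
    """Mark a row/column as a stripe when more than `threshold` of its
    pixels are classified as 1 (majority voting)."""
    binary = pixel_probs > 0.5                  # per-pixel classification
    row_stripe = binary.mean(axis=1) > threshold
    col_stripe = binary.mean(axis=0) > threshold
    mask = np.zeros(binary.shape, dtype=int)
    mask[row_stripe, :] = 1
    mask[:, col_stripe] = 1
    return mask

# Row 0 and column 0 have a clear pixel majority, so they become stripes.
probs = np.array([[0.9, 0.9, 0.9],
                  [0.1, 0.2, 0.1],
                  [0.95, 0.1, 0.05]])
print(stripe_mask(probs))
```

Because well-learned rows/columns sit near ratio 0.9 or 0.1 (as noted above), moving `threshold` to 0.4 or 0.6 leaves the mask unchanged.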
> **Q1: Why the authors only evaluate the row and column, not the density of a specific rectangle area?**
We have tested several tricks to promote the generation quality and finally chose the stripe mask. Corresponding reasons are given at the end of Section 3.1.
Please also refer to the global response and let us know if you have further questions.
Reference:
[1] BoxRouter 2.0: Architecture and implementation of a hybrid and robust global router, ICCAD 2007
[2] A theoretical analysis of the application of majority voting to pattern recognition, ICPR 2014
[3] A weighted majority voting ensemble approach for classification, ICSE 2019 | Summary: This paper presents a new two-phase learning approach, called HubRouter, to address the issue of unconnectivity in the generated routes of global routing (GR) tasks in VLSI systems. It has two steps. Firstly, a deep generative model generates a 'hub,' which acts as a key point in the route; then secondly, HubRouter involves an actor-critic model-based RSMT construction module to connect the hubs. This shift from a pin-pin connection to a hub-pin connection method solves the unconnectivity problem in generative approaches. The HubRouter system ensures all generated routes are connected, eliminating the need for time-consuming post-processing. Experimental results show that HubRouter outperforms other state-of-the-art generative global routing models in wirelength, overflow, and time efficiency. It also finds application in RSMT construction and interactive path replanning, demonstrating its versatility.
Strengths: The paper introduces a novel approach, HubRouter, to global routing, proposing a hub generation and hub-pin connection scheme that effectively addresses the challenge of route unconnectivity.
The experimental results show HubRouter outperforming existing generative global routing models in terms of wirelength, overflow, and time efficiency.
The approach is very general. The authors show that the approach can also be applied to RSMT construction and interactive path replanning, demonstrating the versatility of their method.
Weaknesses: The challenge of the connectivity problem is unclear. Specifically, while the paper introduces a novel approach to tackling the unconnectivity problem in global routing, it fails to adequately establish the significance and relevance of this problem. It would be helpful to elaborate more on the difficulty and significance of the connectivity problem.
The effectiveness of the second phase is dependent on the quality of the hubs generated in the first phase. If the generative models do not create effective hubs, the entire approach could be compromised.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Overall, this paper solves the unconnectivity problem in the generated routes of global routing (GR) tasks in VLSI systems. The experimental results are promising. The efficiency of the proposed model is verified both theoretically and experimentally. However, the reviewer has two concerns:
1. The challenge of the connectivity problem is unclear. It would be helpful to elaborate more on the difficulty and significance of the connectivity problem.
2. The effectiveness of the second phase is dependent on the quality of the hubs generated in the first phase. If the generative models do not create effective hubs, the entire approach could be compromised.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and valuable feedback. Our replies to the questions are as follows.
> **W1/Q1: The background on the challenge of connectivity is unclear.**
Thanks for your suggestion.
Existing generative global routing methods adopt an end-to-end model and suffer from the connectivity problem. When generated routes are not connected, they require a maze-routing post-processing procedure to connect them, which is rather time-consuming and can cause extra overflow. Our proposed approach ensures connectivity via a novel two-phase routing procedure and significantly reduces time overhead compared to the SOTA PRNet (Cheng et al., NeurIPS 2022).
In particular, Table 2 shows that PRNet suffers from severe unconnectivity. Especially on the complex case 'Route-large', PRNet maintains only a 4% correctness rate, meaning that 96% of samples require the time-consuming post-processing. This explains why, although PRNet's generation time in Table 2 is only about 2x that of HubRouter, its total time (generation + post-processing) in Table 3 is about 12x that of HubRouter on average.
We fully agree that it would be more helpful if the statement of the difficulty and significance of the connectivity problem were put forward in the introduction part. We will make this revision in the new version.
> **W2/Q2: The effectiveness of the second phase is dependent on the quality of hubs generated in the first phase, which may lead to suboptimal solutions.**
We admit that the two-phase routing model may lead to suboptimal solutions. However, global routing is so complicated that previous end-to-end methods like PRNet produce even worse solutions than our HubRouter.
We also inject the stripe mask into the model to discard wrongly generated hubs and improve robustness. Moreover, even if some correct hubs are not generated, the second phase still has a chance to connect the correct route through a rectilinear polyline.
Please also refer to the global response, and let us know if you have further questions.
---
Rebuttal Comment 1.1:
Comment: Thank you for these clarifications. I have no further concerns. | Rebuttal 1:
Rebuttal: Dear Area Chairs and Reviewers,
We appreciate the reviewers’ time, valuable comments, and constructive suggestions. From an overall perspective, we are happy to see that the reviewers approve of the novelty (Co1X, Vi96), originality (Co1X, GN8b), and generality (Co1X, 2ZCa, Vi96) of our approach. In particular, we are grateful for the acknowledgment of the significant improvement over the state-of-the-art method (Co1X, Vi96, GN8b) and the reviewers' recognition of our method's contributions to the field (Vi96, GN8b).
Apart from the positive feedback, some concerns are shared across reviewers, so we give global responses as follows:
> **Q1: Why not adopt an end-to-end model instead of a two-phase model?**
Thanks for your question, which is worth discussing.
First, the SOTA PRNet (Cheng et al, NeurIPS 2022) is an end-to-end approach yet our method notably outperforms it regarding total time cost, wirelength, and overflow quality.
Second, to ensure the connectivity of the whole routing, we design a two-phase routing model that generates hubs and then routes. The first phase includes a discretization process, turning continuous probabilities into determined hubs. Because these hubs are discrete values, end-to-end differentiable learning is hardly applicable.
We leave a cost-effective end-to-end solver to future work.
> **Q2: The significance of our model.**
First, to the best of our knowledge, our HubRouter is the first generative model that ensures connectivity in the global routing task. We have conducted several experiments comparing our approach with the SOTA PRNet, a generative model that suffers from unconnectivity, which indicates the significance of connectivity for this task.
Second, one of the main contributions of HubRouter is a notable enhancement of inference speed. As shown in Table 3, HubRouter is on average 12x faster than the SOTA PRNet. This reduction in inference time is meaningful and crucial in the global routing task.
Third, since a theoretical lower bound for this task exists, we further compare the relative error in Table 1 of the rebuttal PDF, where the relative error is computed as $(WL-b)/b$ and $b$ denotes the theoretical lower bound. It turns out that the WL improvement of our approach is not marginal compared with the SOTA PRNet.
**A one-page PDF is uploaded that contains corresponding tables and figures in the response.**
In the following, we provide detailed answers. We are glad to provide further responses to support an informed evaluation.
Pdf: /pdf/cfa97fbfc2b733fe83279ea9e8d2d454752411ce.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
CROMA: Remote Sensing Representations with Contrastive Radar-Optical Masked Autoencoders | Accept (poster) | Summary: The paper presents a new SSL representation-learning framework for remote sensing and earth observation data. The presented framework combines a contrastive objective with a reconstruction objective working on single- or multi-modal inputs, i.e., multispectral satellite data and synthetic aperture radar data. Cross-modal learning is done by cross-attention of the two individually encoded modalities, which are fused and decoded by one lightweight decoder.
In a wide range of experiments, the authors are able to demonstrate the proposed approach capability to outperform baseline and current SOTA approaches.
Although this work is rather a novel combination of already-known approaches, I think it is interesting given the insightful adaptation of these approaches to the remote sensing and earth observation domain. I really enjoyed reading it. What I really like is the idea of using RPE, as realized by the extension of ALiBi to multispectral 2D signals, which allows dealing with different resolutions of satellite data. This particular characteristic of satellite data is very often neglected.
Strengths: - (S1) The use of RPE with its 2d extension of ALiBi including X-ALiBi. I think this is interesting since it aims to tackle the multi-resolution nature of individual bands of remote sensing data.
- (S2) The multi-modal representation is optional, i.e., the model performs well with only one modality if needed. This is particularly interesting for remote sensing scenarios such as natural disasters, where fast response is important but one of the two satellites is unavailable and will need a couple of days to fly over the target region.
- (S3) The wide range of experiments including multiple datasets and downstream tasks all able to demonstrate the outperformance of the presented approach.
- (S4) A broad set of ablation studies providing insights about the inter-working and contribution of each component of the proposed method.
Weaknesses: - (W1) see (L1) under limitations.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - (Q1) How would you extend to more than two modalities? I am asking since in remote sensing and earth observation there are very often multiple modalities / sensors available. How would you model the cross-attention?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: - (L1) The paper does not show the presented approach being capable of generalizing to other problem domains beyond remote sensing or earth observation. I could imagine that there exist other problem domains where multiple sensors are available. Showing how this approach performs in such a scenario would strengthen this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments on our work. We appreciate you recognizing the strengths of our work: (i) the introduction of X- and 2D-ALiBi, (ii) the optionally multimodal nature of CROMA, (iii) the extensive evaluation across methods, tasks, and datasets, and (iv) the thorough ablations. Although our paper is dense, we are glad you found it really enjoyable to read. We will address two points individually:
**Q**: *How would you model the cross-attention with more than two modalities?*
**A**: Extending CROMA to more than two modalities is very exciting. Let’s imagine we are provided with higher-resolution RGB imagery spatially aligned with Sentinel-1 and 2 data. In this case, we could encode this high-res sample separately with a unimodal encoder, just like CROMA encodes radar and optical samples. Then a multimodal encoder could cross-attend to both sets of optical encodings (high-res and Sentinel-2) and bias the cross-attention matrix based on the distance between patches via X-ALiBi. In practice, this would mean repeating the X-ALiBi bias matrix along the key dimension, i.e., from a shape of (batch_size, heads, queries, keys) to (batch_size, heads, queries, 2*keys). If the patches between modalities do not perfectly align—for example, if the high-res sample used patches that were 16x16x3 at 1m spatial resolution—then we’d have to build another X-ALiBi matrix by calculating the relative distance between each query and key patch and concatenate this high-res bias matrix with the Sentinel-2 bias matrix along the key dimension. If there were 196 high-res patches, then the new X-ALiBi bias matrix would be of shape (batch_size, heads, queries, keys+196). In general, we believe that as long as the relative locations between cross-modal patches are known, then X-ALiBi can be leveraged. For example, imagine we have ground-level imagery with known coordinates. We could bias the cross-attention matrix based on the 2D distance between a ground-level representation and various patches in the satellite image. This idea could be extended to multiple ground-level images for a single satellite image. This research direction excites us—thank you for prompting this discussion.
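To make the shape manipulation above concrete, here is a minimal Python sketch of concatenating per-modality distance-based bias matrices along the key dimension (one attention head, toy 2x2 patch grid; all names are illustrative, and this is not our actual implementation):

```python
import math

def alibi_bias(q_coords, k_coords, slope=1.0):
    """X-ALiBi-style bias for one head: negative slope times the Euclidean
    distance between query and key patch centers, shape (queries, keys)."""
    return [[-slope * math.dist(q, k) for k in k_coords] for q in q_coords]

# Two spatially registered modalities sharing the same 2x2 grid of patch centers.
grid = [(r, c) for r in range(2) for c in range(2)]
bias_mod_a = alibi_bias(grid, grid)   # (4 queries, 4 keys)
bias_mod_b = alibi_bias(grid, grid)   # same grid -> identical bias

# Adding a second key modality: concatenate the bias matrices along the key
# dimension, i.e., from (queries, keys) to (queries, 2*keys).
bias_both = [row_a + row_b for row_a, row_b in zip(bias_mod_a, bias_mod_b)]
print(len(bias_both), len(bias_both[0]))  # prints: 4 8
```

If the second modality used a different patch grid, `alibi_bias` would simply be called with that modality's patch centers as `k_coords` before concatenation, matching the high-res example above.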
**Q**: *Validation on other domains with multiple sensors.*
**A**: We agree that validating CROMA on other sensor data would strengthen our work. However, our paper is already quite dense. We look forward to seeing other groups apply CROMA to their applications. The potential for CROMA to be used broadly is one of our main reasons for targeting NeurIPS.
---
Rebuttal Comment 1.1:
Title: Response
Comment: I would like to thank the authors for their time and the level of details provided to address my questions.
In particular, thanks for the in-depth explanation of X-ALiBi's capability to be extended to multiple modalities, coming not only from different (registered) sensors but also from different resolutions. This renders the presented approach more scalable w.r.t. input modalities than published work in this area that employs straightforward contrastive learning approaches for pre-training.
Given that my questions are answered and I think that this submission outlines a very interesting research direction to investigate, I decide to **increase** my rating. | Summary: This paper presents CROMA, a framework that combines contrastive and reconstruction self-supervised objectives to learn rich unimodal and multimodal representations. CROMA separately encodes masked-out multispectral optical and synthetic aperture radar samples and performs cross-modal contrastive learning. X- and 2D-ALiBi, which spatially bias the cross- and self-attention matrices, are also introduced to ensure performance.
Strengths: CROMA aims to address the multi-modal learning problem in the remote sensing (RS) community, which is an important and hot topic. Also, many advanced techniques are adopted and combined properly to ensure strong final results across different tasks. In sum, the proposed method is feasible.
Weaknesses: However, due to the poor statements and organization, the main ideas of this work are hard to follow. Also, as mentioned above, CROMA combines some existing techniques to deal with its tasks. Thus, its novelty is limited for NIPS. Some detailed comments can be found in “Questions.”
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: 1. The main contributions are not clear. Many multi-modal learning models have been proposed; what are your advantages and distinguishing features compared with them?
2. What is FFT? I cannot find its full name in this manuscript.
3. There are three encoders in CROMA. What are the relations between them? What is the rationale behind the freed settings and parameters? How do you decide the input patch sizes?
4. Why do you extend ALiBi to a 2D version? Please explain the necessity.
5. How do you decide the MASK value (i.e., 75%)?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 1 poor
Limitations: 1. Introduction is chaotic. There is no clear, logical flow between paragraphs, resulting in a disjointed content presentation.
2. A meaningful literature review is missing. The authors only display many published literature. However, the relations between them and CROMA are not clear. Also, the inner relationships of the reviewed literature are confusing.
3. The experimental settings are unclear, preventing the readers from simulating your method.
4. The compared models are not enough, limiting the reliability of the results.
5. The experimental results are discussed as shallow.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments on our work. Please see our replies below:
**Q**: *Clarification of main contributions.*
**A**: Please see our comments to all authors that clarify our contributions.
**Q**: *What is FFT?*
**A**: The term FFT does not appear in our document; perhaps the reviewer is referring to FFN? FFN stands for feedforward network—this is the standard way of referring to the feedforward network in a transformer. We will define the term in our revision.
**Q**: *Clarification on CROMA’s three encoders and choice of patch size.*
**A**: CROMA uses three encoders, (i) a unimodal radar encoder, (ii) a unimodal optical encoder, and (iii) a multimodal (i.e., radar-optical) encoder. The unimodal encoders use a standard ViT-B architecture (other than position encoding) with 8x8 patches. ViTs often use 16x16x3 or 8x8x3 patches (HxWxC); since our optical samples have 12 channels, 16x16x12 patches would lose information when linearly projected to the width of the transformer (i.e., 768 for ViT-B or 1024 for ViT-L). Conversely, selecting patches that are too small (i.e., when the number of pixels per patch is far fewer than the width of the transformer) leads to inefficient models [136]. Thus, an 8x8 patch size is a reasonable choice for CROMA.
The multimodal encoder in CROMA is a transformer that receives radar patch encodings at the bottom of the network and cross-attends to optical patch encodings. This approach is similar to CoCa and the original seq2seq transformer design in “Attention Is All You Need”. This explanation is best understood by consulting Figure 1 in our paper.
Please provide additional clarification on what is meant by “freed settings and parameters.” To be clear, all parameters in CROMA are learned end-to-end.
**Q**: *Why invent 2D-ALiBi?*
**A**: ALiBi (ICLR ‘22) is a SoTA relative position encoding method for transformers modeling 1D inputs. Its primary advantage is it can be trained on sequences of, say, 1024 tokens and make inferences on much longer sequences without needing to be re-trained or finetuned on that longer sequence. Because the sizes of imagery in remote sensing datasets vary considerably, we feel that foundation models would benefit from being able to process images with varying numbers of patches. We show how 2D-ALiBi outperforms a SoTA relative position encoding method for ViTs, called PEG (ICLR ‘23), that leverages a convolution between the 1st and 2nd ViT layers and discards position embeddings that are typically added to patch embeddings at the bottom of the network. Furthermore, a model that can flexibly handle images of different sizes can offer a superior trade-off between compute and performance. To illustrate this, we finetune CROMA-B and SatMAE-B on EuroSat for 50 epochs on **96x96px** images—each model is trained once using the default hyper-parameters of each model. Then, we evaluate each model on the EuroSAT validation set at various resolutions:
| Model | 32x32 | 64x64 | 96x96 | 120x120 | 224x224 |
| -------- | ------- | ------- | ------- | ------- | ------- |
| SatMAE-B | 64.1% | 84.2% | 99.1% | 98.3% | 61.4% |
| CROMA-B | 90.7% | 97.6% | 99.2% | 98.3% | 83.1% |
This demonstrates that CROMA is more robust to differences between train/test-inference image sizes. This robustness permits users to select more compute-efficient image sizes without dramatically sacrificing performance. For instance, if a user requires fast predictions only possible on 32x32px images, CROMA will outperform SatMAE by 26.6% on EuroSAT.
Beyond extrapolating to larger images, 2D-ALiBi outperforms 2D-sinusoidal embeddings and PEG without extrapolation, i.e., testing on the exact resolution as training. We believe this is due to the true relative position encoding of 2D-ALiBi, which is both translation and rotation equivariant. These properties complement data in the remote sensing domain that is overhead imagery whose representations should be translation and rotation invariant.
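A small sketch makes the equivariance claim concrete: a bias built from pairwise patch distances is unchanged when the whole patch grid is translated or rotated (illustrative code, not our implementation):

```python
import math

def pairwise_dists(coords):
    """Distance matrix underlying a 2D-ALiBi-style bias (one head, no slope)."""
    return [[math.dist(a, b) for b in coords] for a in coords]

patches = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]  # 2x2 patch grid

# Translate and rotate the whole grid; relative distances are unchanged,
# so a distance-based bias is translation- and rotation-equivariant.
shifted = [(x + 3.0, y - 2.0) for x, y in patches]
t = math.pi / 3
rotated = [(x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t)) for x, y in patches]

base = pairwise_dists(patches)
for transformed in (pairwise_dists(shifted), pairwise_dists(rotated)):
    assert all(math.isclose(base[i][j], transformed[i][j], abs_tol=1e-9)
               for i in range(4) for j in range(4))
print("relative distances preserved under translation and rotation")
```

This is the property that complements overhead imagery, whose representations should be invariant to translations and rotations of the scene.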
**Q**: *Why a 75% mask ratio?*
**A**: We select our hyperparameters by considering both performance and cost. As shown in Table 5, a mask ratio of 50% outperforms a mask ratio of 75% by 0.1%, averaged across 6 evaluations. But a 50% mask ratio is 1.8x slower to train due to the encoders processing more patches. This highlights a benefit of CROMA—it is robust to the choice of mask ratio.
**Q**: *Experimental settings.*
**A**: We provide experimental conditions in our appendix, as we focused on areas of most interest to a NeurIPS audience in our main text. We also anonymously share all code, pretrained models, and preprocessed datasets. Please see the appendix.
**Q**: *Model comparisons.*
**A**: We compare CROMA to all relevant foundation models in our application. Furthermore, some of our ablations are equivalent to other SoTA methods that have yet to be explored for remote sensing. For example, only leveraging contrastive learning amounts to adapting Fast Language Image Pretraining [124] (FLIP, published at CVPR ‘23) to our domain, as FLIP performs cross-modal contrastive learning with masked-out samples. An ablation in our appendix with VICReg and a patch-wise invariance loss objective is equivalent to adapting VICRegL [143] (NeurIPS ‘22) to our domain. In our revision, we will clarify the connection between these ablations and other SoTA computer vision methods not yet explored in remote sensing.
**Q**: *Discussion of experimental results.*
**A**: We agree that our submission would benefit from a more detailed discussion of the results. Our main text focused on using our space to validate CROMA as thoroughly as possible experimentally—hence the probing, kNN, K-means, and segmentation experiments. If our paper is accepted, we will have more room to incorporate a deeper discussion in a revised version.
---
Rebuttal Comment 1.1:
Title: Thanks for the reply
Comment: Thanks the authors for their reply. Although parts of the issues have been modified, the novelty of this work is limited to NIPS. In addition to the poor organization and written, the contributions of this manuscript is narrowed. Thus, I insist on my original decision, i.e., Reject. | Summary: This paper proposes CROMA to align optical and SAR modal images via contrastive learning and reconstruction. Comprehensive experiments on three datasets have demonstrated the effectiveness of CROMA.
Strengths: 1. This paper introduces a multi-modal representation using contrastive learning and reconstruction.
2. The proposed CROMA exceeds SatMAE, which only uses unimodal images.
3. CROMA is faster and more effective than SatMAE.
Weaknesses: 1. CROMA only constructs pos-neg samples from different modal images. Why are image patches in different regions of the same modality not used as negative samples?
2. Many important details are missing. For example, the number of positive and negative samples is not discussed.
3. How is the sampling ratio of positive and negative samples affected? What is the relationship between positive and negative sample sampling and effective inputs in reconstruction?
4. I am worried about the theoretical innovation of the paper. The paper mainly focuses on applying contrastive learning loss and reconstruction loss to remote sensing multimodal modeling, which is also very common in medical multimodal and multi-temporal settings. As an extension of SatMAE, I am not sure whether the innovation of the paper is sufficient for NeurIPS, because many related papers for remote sensing already exist [1][2].
[1] Ayush K, Uzkent B, Meng C, et al. Geography-aware self-supervised learning[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 10181-10190.
[2] Manas O, Lacoste A, Giró-i-Nieto X, et al. Seasonal contrast: Unsupervised pre-training from uncurated remote sensing data[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 9414-9423.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: N/A
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments on our work. We appreciate the recognition of the strengths of our work: the introduction of learning multimodal representations by jointly leveraging reconstruction and contrastive learning and the superiority over the current SoTA. We believe we can answer the concerns raised, but we’d like some clarification on one question. Please see our replies below:
**Q**: *Why not use samples from the same modality as negatives in contrastive learning?*
**A**: Please see our response to reviewer iZ73, "**Q**: *Negative optical samples in contrastive learning?*"
**Q**: *Clarification on the number of positives and negatives.*
**A**: We believe that the paragraph starting at L203 discusses this, including the equation below L207—but we acknowledge this could have been more clear. We use the most popular contrastive learning framework in the literature; this uses one positive sample (in our case, matching radar-optical representations), and negatives are all other samples from the batch (in our case, mismatching radar-optical representations)—SimCLR [99] best exemplifies this method. For CROMA-B, we use a batch size of 7200 (the largest that can fit onto a DGX server consisting of 640 GB of VRAM using bfloat16 precision); this means we use 7199 negatives. If our paper is accepted, we will have more space (with a 10th page) to clarify this in our main text.
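For clarity, the SimCLR-style objective described above (one positive per sample, the remaining batch entries as in-batch negatives) can be sketched as follows; the similarity matrix is a toy example, not our model's output:

```python
import math

def info_nce(sim, temperature=0.07):
    """InfoNCE over a batch similarity matrix sim[i][j] (radar_i vs optical_j):
    diagonal entries are positives; the other batch_size - 1 entries in each
    row act as negatives."""
    losses = []
    for i, row in enumerate(sim):
        logits = [s / temperature for s in row]
        log_denom = math.log(sum(math.exp(l) for l in logits))
        losses.append(log_denom - logits[i])   # -log softmax at the positive
    return sum(losses) / len(losses)

# Toy 3-sample batch: matching cross-modal pairs (diagonal) are most similar,
# so each sample has 1 positive and 2 in-batch negatives.
sim = [[0.9, 0.1, 0.0],
       [0.2, 0.8, 0.1],
       [0.0, 0.1, 0.7]]
print(f"InfoNCE loss: {info_nce(sim):.6f}")
```

With a batch size of 7200, the same construction yields 7199 in-batch negatives per sample, as stated above.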
**Q**: *“How is the sampling ratio of positive and negative samples affected? What is the relationship between positive and negative sample sampling and effective inputs in reconstruction?”*
**A**: We are not sure what is meant by these questions and would appreciate some clarification.
**Q**: *Clarification on the novelty of our CROMA self-supervised learning algorithm.*
**A**: We agree that the main innovation of our paper is combining reconstruction and contrastive losses for learning multimodal representations. Since these two objectives learn different types of representations, we believed combining them would learn more general representations than alone. And we demonstrate this experimentally in our ablations. We have outlined the related work in our paper, but admittedly, it was dense—if our paper is accepted, we will have an extra page of space to further clarify how CROMA differs from related methods. The vast majority of work that leverages reconstruction and contrastive learning to learn joint multimodal representations is in the image-text domain. For example, ALBEF [107] (NeurIPS ‘21), TCL [108] (CVPR ‘22), BLIP [110] (ICML ‘22), MaMMUT [111] (TMLR ‘23) and CoCa [49] (TMLR ‘22)—these were all clever iterations on prior work. In general, these frameworks independently encode full images (no masking) and masked text, then perform a contrastive objective between matching cross-modal samples. Next, they reconstruct the masked text (either with a masked language modeling or autoregressive language modeling objective) conditional on image representations. But none of these approaches were designed for 2D multimodal data. We are primarily inspired by CoCa and significantly adapt it to 2D multimodal data. The simplest way to adapt these frameworks to 2D data would have been to replace the text encoder with another ViT—but encoding mask tokens in a ViT hurts performance and is much slower than hiding patches and leveraging a decoder to predict the hidden patches (this is the main innovation of MAE [31] (CVPR ‘22)). Thus our CROMA framework can best be considered a combination of CoCa and MAE. By leveraging MAE-style masking, we save on compute and provide a target for reconstruction; the efficiency of performing contrastive learning on masked-out cross-modal samples was also found in concurrent work, FLIP [123] (CVPR ‘23). 
These innovations are significant and justify the broad audience that NeurIPS attracts. Two papers came to similar conclusions in parallel with us, CAV-MAE [50] (ICLR ‘23) and MaViL [52](available on arxiv, also under review), but these papers are in the audio-visual domain and are not designed for spatially aligned data that is ubiquitous to RS and other applications. Our multimodal decoder is designed explicitly for spatially aligned multimodal data as tokens in the decoder predict both hidden radar and optical patches corresponding to a precise location on the ground—this is not possible in the CAV-MAE or MaViL frameworks.
Additionally, with X-ALiBi, we bias the cross-attention matrix based on the relative locations between cross-modal patches—this is only possible with spatially registered data and is the first time position encoding has ever been leveraged in cross-attention. Finally, by providing true relative position encoding that is both translation and rotation equivariant, 2D-ALiBi outperforms the SoTA in position encoding for ViTs, PEG [118] (ICLR ‘23). ALiBi [119] (ICLR ‘22) is a groundbreaking method in NLP that is now the method of choice in many SoTA LLMs because of its extrapolation abilities—an extension of it to 2D data is also novel.
Overall, CROMA builds and improves on methodology recently published in NeurIPS and conferences with similar audiences and impact. Not to mention, CROMA substantially outperforms SatMAE [26] (NeurIPS ‘22) and provides a more thorough evaluation of learned representations. We thank you for asking this question, as it demonstrates that the novelty of CROMA was not sufficiently articulated—we will integrate this discussion in our revision.
---
Rebuttal 2:
Title: Thanks for the rebuttal
Comment: Thank the authors for their rebuttal. It partly addressed my concerns, except for the novelty.
This is a good practice to combine contrastive learning and masked image modeling on multi-modal satellite images; however, I cannot regard it as an innovative approach. There is little knowledge improvement for me.
The authors continually emphasize that their approach goes beyond SatMAE, which is unfair due to different pre-training data and an improper baseline, i.e., SSL4EO vs. fMoW-Sentinel. The pre-training data should be aligned. The right baseline should be SatMAE + contrastive learning, to illustrate that the combination is non-trivial; otherwise, it cannot convince most readers at NeurIPS.
I appreciate the extensive empirical results in this manuscript. However, I cannot recommend that this manuscript be accepted to NeurIPS at this round, given its limited novelty, unfair comparison, and insufficient theoretical support.
---
Rebuttal Comment 2.1:
Comment: Thank you for your continued engagement in our work.
Regarding novelty, our method is a novel combination of existing methods: cross-modal contrastive learning, multimodal masked autoencoding, and attention with linear biases (ALiBi). Our method was motivated by the intuition that we describe in our paper: (i) that contrastive and reconstructive pretraining objectives learn different representations that might be complementary when combined and (ii) that EO data would benefit from rotation and translation invariant relative position encoding. This research framework—that uses intuition to combine existing methods in novel ways—is strongly represented in NeurIPS every year.
Regarding comparisons, we extensively compare CROMA to all foundation models for EO. There are many recent frameworks invented in computer vision that we could leverage to pretrain models on the SSL4EO dataset—but doing so for all new approaches is not practical. Specifically, two concerns are raised: (i) we do not compare to a SatMAE model pretrained on SSL4EO, and (ii) we do not compare to a “SatMAE + contrastive learning” framework. Regarding the first concern, we do not believe that SatMAE pretrained on SSL4EO would improve SatMAE’s performance on benchmarks because, qualitatively, the data distribution of fMoW-Sentinel is closer to these benchmarks than SSL4EO (primarily, the sizes of images). In fact, CROMA outperforms SatMAE when finetuning on the fMoW-Sentinel dataset—this demonstrates that our framework learns better representations than SatMAE. Regarding the second concern, the “SatMAE + contrastive learning” framework does not exist, but we could try it. However, we do not expect it to outperform CROMA because CROMA outperforms VICRegL (see our appendix), which outperforms unimodal MAE + contrastive learning (see the VICRegL paper).
We would have been happy to address these two concerns with experiments during the rebuttal period, but these concerns were not raised during the original review. Overall, we are disappointed that these new concerns dropped our score from a borderline accept to a borderline reject. | Summary: The paper presents a self-supervised representation learning model for multimodal Sentinel images. The model learns from geographically aligned optical and radar (Sentinel-2 and Sentinel-1, respectively) representations that are then used for downstream tasks such as classification and segmentation. In the paper, classification and segmentation tasks are illustrated using different approaches, namely fine-tuning, linear probing, and nonlinear probing (MLP). The authors also show other quantitative evaluations (kNN, k-means over classes, a UMAP) to show the quality of the learned representation over SatMAE [26], which is the main competing method.
Strengths: - The paper deals with an important topic, and the models and results presented in the paper are significant for a variety of applications making use of sentinel data.
- Results are validated on well-known datasets in the field and show superiority over a series of strong baselines across several metrics. The ablation study is very complete and shows how the model performs under different changes to the modules.
- The approach of combining masked-out reconstruction and contrastive losses in learning SSL representations is, to the best of my knowledge, novel in the field of geographically aligned data. The use of different modalities is also interesting, although the synergistic use of optical and radar images is well known and studied, albeit for specific applications.
- The paper is very dense but clear enough, well written and well structured.
Weaknesses: I think that the paper is sound, and although it touches upon a niche application of computer vision that might not be of wide interest to the NeurIPS audience, it could be a good contribution. However, I have a series of comments that, if addressed, could improve the paper. In general:
- The data description and the explanation of the different levels of preprocessing for each dataset (e.g., atmospheric corrections from L1C to L2A), and how those influence the model, are not well explained. For instance, Sentinel-2 has 13 channels, of which only a subset is useful for land-cover applications. Also, the spatial resolution of the different S2 channels varies widely (from 10 to 60 m), and that of S1 changes depending on the processing level.
- The different benchmarks tested are characterised by different preprocessing and channel subsets, and it is unclear how the models are fine-tuned in this setting and what the dependency on those aspects is.
- I felt that the related work section at L72 makes plenty of references but is not so good at clarifying the main lines of research and their pros and cons. There are many papers, maybe too many, and it is hard to get useful information out of that section.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors:
- L91: I found the concept of "optionally multimodal" very interesting, and I think it could be better framed. I am not sure whether the proposed CROMA is indeed optionally multimodal, but the fact that some image pairs are not available concurrently is a very common issue, which is often solved by temporal composition but might not be optimal for some fine-grained monitoring applications.
- L109: it is not completely clear to me why (beyond the ablations) RPE offers, sometimes, good performance when extrapolating to larger images. At the same time, I think, contrary to what is stated afterwards, that the wide variability of remote sensing images is not an issue: images are referenced absolutely to geographical coordinates and can be cropped following the constraints of the task and hardware.
- What is not so clear to me is how the choice of spectral channels for S2 is made, and how resolution is dealt with. S2 channels have a ground sampling distance that varies between 10 m, 20 m, and 60 m, while radar can vary depending on the processing level. It is unclear how patches are sampled and how the mismatches in resolution are dealt with, since when sampling, e.g., an 80x80-pixel patch, the actual content varies a lot unless interpolated and upsampled, which could itself introduce artefacts. I think these data preprocessing steps should be better explained.
- It is unclear at which level of processing the data is used, whether L2A-corrected or L1C Sentinel-2 data, and how that processing is performed. It is also mentioned that 12 bands are used, but in fact S2 has 13; it is just that some of these bands are used for sensing properties of the atmosphere, such as clouds and aerosols, and do not help in land-cover / land-use modelling. Again, I think that some of these aspects should be clarified in the main text.
- I wonder why [86] has not been included in the baselines, as it is one of the main papers highlighted in the related work section dedicated to RS representations.
- In light of the above points, some datasets used to test the model have very different properties and characteristics. How is the CROMA model, pretrained on 12 S2 channels, retrained for each of the benchmarks to make use of the specific data provided? E.g., fMoW, as far as I remember, only makes use of 8 channels and not 12.
- L203 and following: It is unclear why no other optical image participates in the definition of negative samples; this could be beneficial to encode differences in land cover at different locations, or to account for seasonality effects.
- L240: it is unclear what "single label benchmarks" are.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: - Limitations shortly mentioned in the conclusion section. I agree with those highlighted.
- I had the feeling when reading the paper that CROMA performing better than SatMAE was presented as all that was needed. I think that SatMAE comes with pros and cons that are not very well discussed and framed. Since the paper compares with and improves directly upon SatMAE, these aspects could have been better presented.
- I really missed a test on non-Sentinel data. I do agree that Sentinel is a great data source, but it is not the only one used, particularly when studies need to go back before 2015/2016. It would have been nice to see some results on other satellite data; I understand that this could have been too much work or out of scope, but I would nonetheless mention it (this goes beyond higher spatial and spectral resolution: it is often interesting to transfer models to lower spatial and spectral resolution).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments on our work. We believe we can address all the concerns you raise.
**Q**: *Optionally multimodal.*
**A**: CROMA is indeed optionally multimodal. In sections 4.1 & 4.2, we only use the optical encoder (Sentinel-2-only tasks). In section 4.3, we use all three encoders (multimodal tasks). And in section 5.1, we use all three combinations (optical-only, radar-only, and joint). Crucially, each of the three combinations of encoders requires a single pretraining run that jointly pretrains these three encoders end-to-end.
**Q**: *RPE.*
**A**: RPE captures the relative locations between tokens in a transformer rather than their absolute locations in the sequence. As ALiBi [119] (ICLR ‘22) shows, RPE methods often fail to extrapolate to longer sequences effectively. PEG (ICLR ‘23) showed that a simple convolution between the 1st and 2nd ViT layers (along with discarding position embeddings) improves performance and enables extrapolation. For our application, we show that 2D-ALiBi outperforms PEG with and without extrapolation. 2D-ALiBi is a true RPE method that is both translation and rotation equivariant; these properties are desirable for EO imagery.
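To make this concrete, here is a minimal sketch of a distance-based 2D attention bias in the spirit of 2D-ALiBi. This is our illustration, not the paper's implementation: the Euclidean-distance penalty and the single fixed slope are assumptions, and per-head slope schedules are omitted. Because the bias depends only on relative distances between patch positions, it is unchanged when the patch grid is translated or rotated:

```python
import math

def alibi_2d_bias(grid_h, grid_w, slope):
    """Additive attention bias for one head: -slope * Euclidean distance
    between the 2D grid positions of the query and key patches.
    The result is added to the attention logits before the softmax."""
    coords = [(i, j) for i in range(grid_h) for j in range(grid_w)]
    n = len(coords)
    bias = [[0.0] * n for _ in range(n)]
    for q in range(n):
        for k in range(n):
            dy = coords[q][0] - coords[k][0]
            dx = coords[q][1] - coords[k][1]
            bias[q][k] = -slope * math.hypot(dy, dx)
    return bias
</antml>```

Since the function takes the grid size as an argument, the same model can be evaluated on larger or smaller images at test time by recomputing the bias for the new number of patches, with no retraining.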
The sizes of imagery in RS benchmarks vary considerably. Still, we agree that users can query and crop imagery to their liking if they have access to the original data sources. Besides this, image-size extrapolation would be a significant feature for RS foundation models. By increasing or decreasing the number of patches, users can vary the amount of compute they wish to spend on a sample. To illustrate this, we finetune CROMA-B and SatMAE-B on EuroSAT for 50 epochs on **96x96px** images—each model is trained once using the default hyper-parameters of each model. Then, we evaluate each model on the EuroSAT validation set at various resolutions:
| Model | Test Resolution -> | 32x32 | 64x64 | 96x96 | 120x120 | 224x224 |
| -------- | ------- | ------- | ------- | ------- | ------- | ------- |
| SatMAE-B | | 64.1% | 84.2% | 99.1% | 98.3% | 61.4% |
| CROMA-B | | 90.7% | 97.6% | 99.2% | 98.3% | 83.1% |
For instance, if a user requires fast predictions only possible on 32x32px images, CROMA will outperform SatMAE by 26.6% on EuroSAT.
**Q**: Data preprocessing.
**A**: For pretraining, we use the SSL4EO dataset assembled, preprocessed, and published by another lab [85]. This dataset provides paired Sentinel-2 L2A data (atmospheric corrections removing the cirrus band, resulting in 12 bands) and Sentinel-1 GRD data—this is briefly stated on L220 of our paper. The data is already upsampled to a 10m per pixel spatial resolution for all relevant channels. According to the SSL4EO codebase, using Google Earth Engine, they query: (i) the 'COPERNICUS/S2_SR' collection, filtering images with greater than 10% cloud coverage, and bilinear spatial upsampling, and (ii) the 'COPERNICUS/S1_GRD' collection in interferometric wide mode with VV and VH channels. We selected the SSL4EO dataset for pretraining because it is the largest preprocessed Sentinel-1 & 2 imagery collection and can be easily downloaded, making replication feasible. Furthermore, given the data collected by SSL4EO, before feeding them into our models, we normalize all channels exactly like SatMAE [26]. In our revised appendix we will clarify how data is preprocessed.
**Q**: *Handling 13 channels.*
**A**: The Sentinel-2 benchmarks contain either 12 (with cirrus removed) or 13 channels (with cirrus included). When applying CROMA to benchmarks that contain 13 channels, we drop the cirrus band, giving our model 12 of the 13 available channels. We do not re-train CROMA for each benchmark—all CROMA models (CROMA-B, CROMA-L, and ablations) are pretrained on the SSL4EO dataset described above. fMoW-Sentinel contains 13 channels [26].
**Q**: *SatViT-V2 comparison.*
**A**: SatViT-V2 [86] only processes Sentinel-1 & 2 data stacked along the channel dimension—it is not optionally multimodal. Therefore, it cannot process the optical-only datasets we evaluate in sections 4.1 & 4.2. In section 4.3, we compare CROMA with SatViT-V2 on two multimodal benchmarks, BigEarthNet [126] and DFC2020 [137]—CROMA significantly outperforms SatViT-V2, which was pretrained using masked autoencoding.
**Q**: *Negative optical samples in contrastive learning?*
**A**: We selected this cross-modal contrastive learning (CMCL) framework because it is the standard in the literature. The representations of non-matching samples of the same modality are much closer to each other than the representations of matching samples of different modalities. This has been widely observed [*1, *2, *3] and is now coined the “modality gap”—nicely illustrated in Figure 1 of “Mind the Gap” [*1]. We observe this same modality gap between optical and radar representations in CROMA (both at initialization and convergence). Including optical samples as negatives in the CMCL calculation amounts to introducing very difficult negatives. Although “hard negatives” can improve representations, overly difficult negatives can hurt representations [*4, 36, 145].
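As a schematic illustration of this design choice (a hypothetical plain-Python sketch with toy embeddings, not CROMA's implementation), note that in cross-modal InfoNCE the denominator for an optical query ranges only over the batch of radar keys, so other optical samples never enter the loss as negatives:

```python
import math

def cross_modal_infonce(optical, radar, temperature=0.1):
    """Symmetric cross-modal InfoNCE. optical[i] and radar[i] are the
    matching (positive) pair; the negatives for optical[i] are the
    other radar embeddings only -- other optical embeddings never
    appear in the denominator."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def nce(queries, keys):
        loss = 0.0
        for i, q in enumerate(queries):
            logits = [dot(q, k) / temperature for k in keys]
            m = max(logits)
            log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
            loss += log_denom - logits[i]
        return loss / len(queries)

    return 0.5 * (nce(optical, radar) + nce(radar, optical))
</antml>```

Using representations from the same modality as negatives would place the overly hard same-modality comparisons discussed above directly into the denominator, which is exactly what this formulation avoids.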
In our appendix, we did experiment with hard negatives via hard negative mixing (HNM, [145])—this mixes optical representations with radar representations to build hard negatives. We show HNM hurts representations across all 6 evaluations. Using optical representations as negatives would create even more difficult negatives than mixed optical and radar representations.
[*1] “Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning,” in NeurIPS ‘22
[*2] “Towards understanding the modality gap in CLIP,” in ICLR ‘23 Workshop
[*3] “Understanding and constructing latent modality structures in multi-modal representation learning,” in CVPR ‘23
[*4] “Contrastive learning with hard negative samples,” in ICLR ‘21
---
Rebuttal Comment 1.1:
Title: Follow up
Comment: I'd like to thank the Authors for the follow up.
From my perspective, the rebuttal is clear, and definitely improves my understanding of the paper. There are still a couple of minor points that are not explicitly addressed, but I agree those are not worthy discussion at this stage and can be directly incorporated in the camera ready.
I think the contribution is interesting and overall relevant not only to the EO community, so I am happy to increase the score to weak accept, and I look forward to discussing further with the other reviewers if needed. | Rebuttal 1:
Rebuttal: We thank all reviewers for their feedback. Before replying to reviewers individually we will restate our contributions:
* We leverage reconstructive and contrastive objectives to learn joint multimodal representations. This is not only novel for Earth Observation (EO) but is novel for multimodal 2D data. Of the many multimodal learning frameworks recently published, only two also learn multimodal representations of 2D data via these combined pretraining objectives: CAV-MAE [50] (ICLR ‘23) and MaViL [52] (available on arxiv, also under review). Both CAV-MAE and MaViL were developed concurrently with our work. Both frameworks are designed for audio-visual signals, not the spatially aligned sensor data that we focus on. Spatial alignment lets us fuse corresponding cross-modal patches—learning joint multimodal patch representations. We are glad that reviewers iZ73, too9, and FExC acknowledge this contribution.
* We extend ALiBi [119] (ICLR ‘22) to 2D data and show that 2D-ALiBi outperforms PEG [118] (ICLR ‘23) for our application and is novel to ViTs. Additionally, our X-ALiBi method is the first time position encoding has been leveraged in cross-attention, not just in our application. These methods allow CROMA to effectively generalize to smaller or larger images at test-time without further training. This means that CROMA models—and ViTs that will leverage our work by using 2D-ALiBi—offer a superior trade-off between accuracy and compute. We are glad that reviewer FExC “really likes” this contribution and seems interested in building on it.
* We thoroughly demonstrate that CROMA significantly outperforms the previous SoTA, SatMAE [26] (NeurIPS ‘22). Our evaluation consists of finetuning, linear probing, nonlinear probing, kNN classifying, and K-means clustering the learned representations of CROMA and all other SoTA foundation models for Earth Observation under identical conditions. Additionally, two of our ablations are equivalent to adapting other SoTA algorithms in computer vision to our domain: (i) Fast Language Image Pretraining (FLIP [124], published at CVPR ‘23) performs cross-modal contrastive learning with masked-out samples to efficiently outperform CLIP [124], and (ii) VICRegL [143] (NeurIPS ‘22) combines the VICReg [142] objective between image representations and an MSE patch-wise invariance objective to learn SoTA local representations (see this experiment in our appendix). CROMA outperforms both. We are glad that reviewers iZ73, too9, and FExC view our extensive experiments and ablations as a strength.
We acknowledge that our paper is dense and are glad reviewers iZ73 and FExC liked our writing and presentation. Our extensive experiments and ablations were only made possible by this density and by assuming our readers had a prior understanding of related work—specifically, vision transformers, masked autoencoders, contrastive learning, and prior foundation models for EO. Without this background, we completely understand how our paper may be challenging to comprehend. Should our paper be accepted to this conference, we will be granted a 10th page which we will dedicate to providing more background, clarifying our contributions, and including more experimental conditions in our main text (as of now, much of our experimental conditions are in the appendix).
Our paper improves on methods recently published at this conference and conferences with similar audiences and impact. As reviewer FExC correctly points out, many other domains feature multiple sensors whose data are spatially registered. Our framework can be directly used for these applications. We also see tremendous potential for X- and 2D-ALiBi (or variants building on them, e.g., for 3D data) that can be broadly applied to transformers. We are targeting this conference to publish our work for these reasons, not only because we outperform SatMAE.
We are happy to continue our discussions with all reviewers. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
$\texttt{TACO}$: Temporal Latent Action-Driven Contrastive Loss for Visual Reinforcement Learning | Accept (poster) | Summary: This work proposes a simple yet effective temporal contrastive learning approach for encoding the high-dimensional observations used as inputs for reinforcement learning. The authors propose a loss function (TACO) related to the mutual information between representations of current states paired with action sequences and the future states. By jointly optimizing this TACO loss with the CURL loss and a reward prediction loss, the proposed method outperforms SOTA representation learning algorithms in on-policy and off-policy RL frameworks. They also validate its effectiveness in model-based frameworks.
Strengths: - The proposed method outperforms the SOTA algorithms both in model-free and model-based frameworks
- The experiments were conducted in various environments with multiple baselines (including on-policy, off-policy, and model-based).
- An appropriate ablation study was performed for each loss term.
Weaknesses: - In terms of the TACO loss, the difference between DRIML and the proposed method lies in the fact that the proposed method considers the entire action sequence up to t+K, rather than just a single action. Looking at Figure 7 in the supplementary materials, it can be seen that in 4 out of 9 environments, TACO performs better when K=1 compared to K=3. This could suggest that extending DRIML appropriately to a continuous action space yields performance similar to TACO.
- The authors mention that they implemented some baselines based on DrQ-v2 and additionally considered action as an input (lines 258-260). However, there is no detailed explanation of this implementation, making it somewhat difficult to ensure a fair comparison.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Regarding “recognizing the significance of learning action encoding, as discussed earlier, we’ve integrated action representation learning into all these baselines.” could you provide a more detailed explanation of how action inputs were considered in ATC and SPR in the existing methodologies?
- In lines 257-258, it is stated, "Without the DrQ-v2 backbone algorithm, the performance reproduced by their original implementation is significantly worse." What is the difference between extending DRIML to a continuous action space and DRIML with the DrQ-v2 backbone you used as a baseline? Which of these differences do you think has the most significant impact on performance?
- Does the RL loss flow into the encoder, or does the encoder only update through the loss in Equation 3-5? If it is the latter, how does the RL loss affect performance?
- Does the performance decrease if the action encoder is not used and only projection is performed?
- Line 318: Please check the name of the algorithm (ACO → DBC for [50]).
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors mentioned the limitation in the conclusion section. Since they utilize InfoNCE objective and the performance of the proposed method depends on the batch size, the main objective of TACO impacts computational efficiency.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful feedback and review! Below we address the concerns and questions that you have raised. We are encouraged that you appreciate TACO's significant empirical performance and recognize the comprehensiveness of our experiments in both online and offline RL, alongside the ablation studies for each loss term.
---
We would like to first explain the question around the implementation of other representation learning baselines.
> The authors mention that they implemented some baselines based on DrQ-v2 and additionally considered action as an input. However, there is no detailed explanation of this implementation, making it somewhat difficult to ensure a fair comparison.
> Could you provide a more detailed explanation of how action inputs were considered in ATC and SPR in the existing methodologies?
The learning objective of ATC loss does not involve action input. For SPR, it is implemented as follows.
```
z_hat = state_encoding(observation[0])
spr_loss = 0
for k in range(K):
    u = action_encoding(action[k])
    z_hat = h(torch.cat([z_hat, u], dim=-1))            # latent transition model
    z_next = state_encoding_target(observation[k + 1])  # target (EMA) encoder
    y_hat = q(g(z_hat))                                 # online projection + prediction head
    y_next = g_target(z_next)                           # target projection of the future state
    spr_loss += -cosine_similarity(y_hat, y_next)
```
Here, h is a latent transition model, g is a projection layer, and q is the prediction head. Following the insights of TACO, we let the critic and the SPR loss share the same action encoder so that we focus only on the comparison of the temporal contrastive loss. In other words, we update the action encoder from both the SPR loss and the critic's TD loss. (See our comments below for a detailed discussion of this design choice for TACO.)
---
> TACO performs better when K=1 compared to K=3. This could suggest that extending DRIML appropriately to a continuous action space yields performance similar to TACO.
> What is the difference between extending DRIML to a continuous action space and DRIML with the DrQ-v2 backbone you used as a baseline?
We would like to clarify the key distinction between our approach and DRIML, as well as other representation learning baselines.
1. The original DRIML and SPR papers, build their methods on top of the C51 algorithm (DQN for SPR) and is specifically tailored to environments with discrete action spaces. C51 algorithm itself cannot extend to continuous action spaces. This is why we choose to re-implement DRIML on top of DrQ-v2, a simple yet strong online RL algorithm for visual continuous control.
2. DRIML focuses on environments with small, well-represented, abstract discrete action spaces, overlooking the importance of action representation learning.
3. In contrast, we identify the importance of action representation learning in continuous control, an under-explored topic in previous works. We introduce TACO as a simple yet effective approach that utilizes temporal contrastive loss to learn state and action representations.
4. For the comparison with DRIML in Table 2, we also incorporate action representation learning in the same way as TACO and only focus on the comparison of the design of temporal contrastive loss.
5. Non-trivial improvement on Hopper Hop and Acrobot Swingup: We still observe notable enhancements in tasks such as Hopper Hop (a 20.8% improvement) and Acrobot Swingup (an 8.5% increase). This can be attributed to DRIML's contrastive-loss positive pairs being policy-dependent, potentially causing stability concerns during policy updates. In contrast, TACO's design does not depend on the policy, offering a more stable solution.
In summary, the main contribution of our paper lies in both identifying the importance of action representation in continuous control and proposing TACO as a simple yet effective solution to address this problem.
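For concreteness, TACO's temporal contrastive objective pairs the current state representation with an encoded action sequence a_t .. a_{t+K-1} as the query, and treats the representation of s_{t+K} as the positive key, with other samples in the batch as negatives. The sketch below is ours, not the paper's code: the learned projection layers G and H are replaced by an elementwise sum so queries and keys share a dimension, and encoder outputs are stand-in lists of floats:

```python
import math

def taco_loss(z_t, u_seq, z_future, temperature=1.0):
    """Schematic TACO InfoNCE over a batch.
    z_t[i]      : embedding of the current state s_t for sample i
    u_seq[i]    : embedding of the action sequence a_t .. a_{t+K-1}
    z_future[i] : embedding of s_{t+K}; the positive key for sample i,
                  while other samples' future states act as negatives."""
    queries = [[s + a for s, a in zip(z, u)] for z, u in zip(z_t, u_seq)]

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    loss = 0.0
    for i, q in enumerate(queries):
        logits = [dot(q, k) / temperature for k in z_future]
        m = max(logits)
        log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
        loss += log_denom - logits[i]
    return loss / len(queries)
</antml>```

Because the negatives come from the other samples in the batch, the quality of the objective depends on batch size, consistent with the limitation acknowledged in the paper's conclusion.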
---
> Does the RL loss flow into the encoder, or does the encoder only update through the loss in Equation 3-5? If it is the latter, how does the RL loss affect performance?
The action encoder is updated through both the critic's TD loss and the TACO loss, allowing it to learn more informative action representations. Specifically, when we exclude the TD loss from updating the action encoder, there is a noticeable performance drop, with the 1M online performance falling from 541 +- 38 to 492 +- 44 on Quadruped Run, and from 261 +- 52 to 211 +- 86 on Hopper Hop.
---
> Does the performance decrease if the action encoder is not used and only projection is performed?
Yes, the performance does indeed decrease if the action encoder is not used and only projection is performed. By excluding the action encoder and taking raw action as input for the critic, the 1M online performance drops to 499 +- 32 on Quadruped Run and to 221 +- 36 on Hopper Hop, down from 541 +- 38 and 261 +- 52, respectively. Compared to DrQ-v2's 1M performance of 407 +- 21 for Quadruped Run and 192 +- 41 for Hopper Hop, we still observe a noticeable improvement as state and action representations benefit from the temporal contrastive loss. Nevertheless, these findings emphasize the importance of the action encoder and suggest that allowing the critic and temporal contrastive loss to share the same action embedding effectively aids in learning action representations. We appreciate the reviewer's insight and question, and we will clarify this point in our revised manuscript.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: I appreciate the authors for their detailed responses to my questions. However, there are some parts of the authors' response that are a bit difficult to understand. I'll write down the parts I understood, and please let me know if the following explanations are incorrect.
1. The only difference between DRIML with the DrQ-v2 backbone and TACO is whether K is 1 or can take values other than 1.
2. Additionally, among the 5 environments shown in Table 2, TACO performs best when K=1 in 2 of them. (Looking at Figure 7 in the supplementary materials, it can be seen that in 4 out of 9 environments, TACO performs best when K=1.)
3. In other words, simply changing from C51 to DrQ-v2 in DRIML already achieves SOTA-level performance in 4 out of the 9 environments.
If the above content is correct, considering that the contribution of DRIML or TACO is not about which RL agent to use but rather in the context of representation learning, it might suggest a significant limitation in the novelty of TACO.
---
Reply to Comment 1.1.1:
Title: Response
Comment: Thank you for your response. We believe there may be some misunderstandings regarding our previous response. In Table 2 of our original manuscript, when comparing TACO to DRIML, we did not just simply adopt the DRIML objective from C51 to DrQ-v2 for continuous actions. We also incorporated insights from TACO, allowing DRIML to learn an action representation jointly from both the temporal contrastive loss and the critic's TD loss. As shown in our manuscript and earlier response, this important insight has led to a significant performance boost for TACO. We also incorporated it into the implementation of the baseline DRIML algorithm because our focus here was solely on comparing the design of the temporal contrastive loss. Yet, as illustrated in Table 2, we observed non-trivial improvements in tasks like Hopper Hop (a 20.8% increase) and Acrobot Swing-up (an 8.5% rise). As explained in our earlier response, this is potentially due to the inherent limitations in the design of the temporal contrastive loss in DRIML, which results in the positive relationships being policy-dependent and unstable. | Summary: This paper introduces an auxiliary objective based on contrastive learning to learn action and state representation for continuous control benchmarks. The auxiliary objective is called TACO. The main idea behind the objective is to maximize the mutual information between the current state s_t, current and future actions {a_t, a_{t + 1}, … , a_{t + k}} and the future state s_{t + k}. They show that they can outperform various model-free, model-based methods on various environment in the deepmind control suite.
Strengths: The main thing I like about the paper is the breadth of applicability of the approach. The authors use the objective for both model-free and offline RL methods. The method is compared against relevant baselines in both these cases. The authors also compare against other relevant methods that use auxiliary pretaining objectives for learning state representations. This thorough comparison shows the effectiveness of the approach.
Weaknesses: The main concern I have is the use of reward prediction and curl objectives. The main novelty of the paper seems to be the contrastive objective in equation 3. But it seems that there are no experiments that evaluates and studies this objective in isolation. Figure 6b removes one objective at a time but it would be nice to study the effective of removing both reward prediction and curl objectives. Without this it is hard to say whether the empirical gains are justified by the motivation and theoretical claims of the paper. I would be happy to increase the score if this point is addressed.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: It seems that the TACO objective is used in conjunction with the underlying RL algorithm and not as a pertaining step. I wonder if the authors have tried using it for pretraining? If not, is there any reason why the authors have not used it for pretraining?
I could not find the architectural details of H_\theta and G_\theta, could the authors specify the details of these?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have discussed limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback and review! We are encouraged by your recognition of the broad applicability of our approach, as manifested in our application of TACO for both model-free and offline visual RL settings.
---
To address your question of CURL and reward prediction loss, below we conduct a comprehensive experiment by running TACO without CURL and reward prediction loss on five tasks (same as the five tasks in **Table 2** of our original manuscript): Quadruped Run, Hopper Hop, Walker Run, Reacher Hard, Acrobot Swingup. Here we show the 1M online RL performance.
| | TACO | TACO w/o reward & CURL | DrQ-v2 |
| --- | --- | --- | --- |
| Quadruped Run | 541 ± 38 | 501 ± 24 | 407 ± 21 |
| Walker Run | 637 ± 21 | 615 ± 11 | 517 ± 43 |
| Hopper Hop | 261 ± 38 | 242 ± 12 | 192 ± 41 |
| Reacher Hard | 883 ± 21 | 882 ± 67 | 572 ± 51 |
| Acrobot Swingup | 241 ± 21 | 301 ± 42 | 210 ± 12 |
As the experimental results suggest, for tasks like Quadruped Run, Walker Run, and Hopper Hop, omitting the reward prediction and CURL losses has a slight negative influence on performance, albeit not a substantial one. This observation led us to incorporate these two auxiliary losses into our final objective. Interestingly, we also found that in the Acrobot Swingup task, the agent's 1M performance actually improved when these two losses were removed, increasing from 241 to 301.
These experimental findings reinforce our claim that while CURL and reward prediction loss could further improve the performance in many tasks, the proposed temporal contrastive loss of TACO is indeed the central and most impactful component.
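To make the shape of this central component concrete, below is a minimal NumPy sketch of an InfoNCE-style temporal contrastive loss of the kind described above, where a batch of projected (state, action-sequence) embeddings is matched against the corresponding projected future-state embeddings via a learnable bilinear matrix `W`. The function name, batch construction, and shapes are our illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def taco_infonce_loss(g_z, h_z_future, W):
    """InfoNCE-style temporal contrastive loss (illustrative sketch).

    g_z:        (B, D) projections G(z_t, u_t, ..., u_{t+K-1})
    h_z_future: (B, D) projections H(z_{t+K})
    W:          (D, D) learnable bilinear similarity matrix
    """
    logits = g_z @ W @ h_z_future.T                   # (B, B) pairwise similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # stabilize the softmax
    # Cross-entropy where the positive pair for row i is column i.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(g_z))
    return -log_probs[idx, idx].mean()
```

Minimizing this loss pushes each (state, action-sequence) embedding toward its own future-state embedding and away from the other futures in the batch, which is why a large batch size matters.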
---
Next, we would like to address your two other questions.
> Question1: I wonder if the authors have tried using it for pretraining?
Exploring the use of TACO loss as a self-supervised pretraining objective for both state and action representation is indeed an exciting future direction. In the current paper, our primary focus lies in online and offline RL learning from scratch without any prior knowledge. Recognizing the interest in this aspect, and despite the limited rebuttal period, we conduct an additional experiment to highlight the potential of using TACO for pretraining these representations. The detailed results can be found in **Figure 18** of our attached PDF.
Specifically, we experimented with pretraining on an offline dataset of **Walker Walk Replay** (generated in a manner consistent with our "Replay" dataset in offline RL experiments) to test the learned representation's generalization to a new task, **Walker Run**, evaluating with both online RL and few-shot behavior cloning. For online RL, we initialize DrQ-v2 with the pretained state and action encoders. For few-shot behavior cloning, we initialize the policy with the pretrained state encoder. As demonstrated in **Figure 18** of the attached 1-page PDF, by pretraining on the walker walk dataset, the state and action representation indeed captures the essential information for the shared embodiment (walker) across two different tasks. Thus, it facilitates both efficient online RL training and few-shot imitation learning.
---
> Question 2: I could not find the architectural details of H_\theta and G_\theta, could the authors specify the details of these?
G is a two-layer Multilayer Perceptron (MLP) where the input size is observation feature dimension plus K (number of timesteps in TACO) times the latent action dimension. The output size of G is the same as the observation feature dimension. H is also a two-layer MLP, with both its input and output sizes being the observation feature dimension. Both G and H utilize a hidden layer size of 1024, and same as DrQ-v2, the observation feature dimension used in TACO is 50.
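Based on these stated details, a minimal sketch of G and H follows (the hidden size of 1024 and observation feature dimension of 50 come from the rebuttal; K=3 and the latent action dimension of 8 are illustrative assumptions):

```python
import numpy as np

def make_mlp(in_dim, hidden_dim, out_dim, rng):
    """Two-layer MLP (Linear -> ReLU -> Linear) with random weights, as a sketch."""
    W1 = rng.normal(scale=0.02, size=(in_dim, hidden_dim))
    W2 = rng.normal(scale=0.02, size=(hidden_dim, out_dim))
    return lambda x: np.maximum(x @ W1, 0.0) @ W2

rng = np.random.default_rng(0)
obs_dim, latent_act_dim, K, hidden = 50, 8, 3, 1024  # latent_act_dim and K assumed

# G: input = obs feature dim + K * latent action dim, output = obs feature dim.
G = make_mlp(obs_dim + K * latent_act_dim, hidden, obs_dim, rng)
# H: input and output are both the obs feature dim.
H = make_mlp(obs_dim, hidden, obs_dim, rng)
```

G consumes the current latent state concatenated with K latent actions, while H projects the future latent state to the same space for the contrastive comparison.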
---
Rebuttal Comment 1.1:
Title: Additional Questions?
Comment: Thank you again for your constructive feedback. In our earlier response, we provided an additional ablation study on the CURL and reward prediction loss, as well as another experiment using TACO as a feature representation pretraining objective. If you have further questions, we are more than happy to answer them. | Summary: The paper introduces TACO, a framework that learns state and action representations simultaneously in visual reinforcement learning for continuous control tasks. TACO optimizes mutual information between current state-action pairs and future state representations. It additionally optimizes 2 auxiliary losses. Experimental results demonstrate TACO can achieve great performance gains in online and offline RL settings. TACO offers a flexible and stable approach for capturing essential information in high-dimensional continuous control tasks.
Strengths: - TACO is a simple yet effective framework that learns state and action representations by an auxiliary contrastive learning task. TACO could be integrated into both online and offline visual RL algorithms flexibly.
- Extensive experiments on the DeepMind Control Suite demonstrate that TACO has outstanding performance.
- The paper provides theoretical analysis of TACO's objectives.
- In general, the paper is well-written.
Weaknesses: - The proposed contrastive learning objective is very similar to DRIML. Both methods improve the performance of the model-free agent by enhancing the predictability of the latent representation through contrastive learning. The major difference is that in this paper, the whole action sequence is given instead of the first action only. Therefore, I think the novelty is limited.
- Extra parameter K is required to tune on different tasks.
- It would be better if the proposed method can be evaluated in more environments besides DMC.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Line 134-136, "maximizing this mutual information objective ensures that the learned representations are sufficient for making optimal decisions." However, Thereom 3.1 only shows that the optimal Q function is the same with equivalent state and action representations. How do you ensure the optimal action will be the same?
- In Eq. 4, Why do you use a learnable parameter $W$ as a similarity measure, as the features have already been projected to the latent space by network G and H?
- Is the action encoder only used in the temporal contrastive loss or is it also incorporated in policy learning?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The computational limitation is considered in the paper. In addition to the time complexity, I am also curious about the GPU memory cost caused by the large batch size.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful feedback and review! We are encouraged that you recognize TACO's simplicity, flexibility, outstanding performance, and theoretical analysis, all of which contribute to the strength of our approach. Below we address the concerns and questions that you have raised.
---
> Novelty of our approach: comparison with DRIML
We would like to clarify the key distinction between our approach and DRIML, as well as other representation learning baselines. Below we list the comparison with DRIML in bullet points.
1. DRIML & SPR build on top of the C51/DQN algorithms, which do not extend to environments with continuous action spaces.
2. DRIML & SPR focus on environments with small, well-represented, abstract discrete action spaces, overlooking the importance of action representation learning.
3. In contrast, we identify the importance of action representation learning in continuous control, an under-explored topic in previous works. We introduce TACO as a simple yet effective approach to utilize temporal contrastive loss to learn state and action representations.
4. For the comparison with DRIML in Table 2, we also incorporate action representation learning in the same way as TACO and only focus on the comparison of the design of temporal contrastive loss.
5. Non-trivial improvement on Hopper Hop and Acrobot Swingup: We still observe notable enhancements in tasks such as Hopper Hop (showing a 20.8% improvement) and Acrobot Swingup (with an 8.5% increase). This can be attributed to DRIML's contrastive loss positive pairs being policy-dependent, potentially causing stability concerns during policy updates. In contrast, TACO's design is policy-neutral, offering a more stable solution.
In summary, the main contribution of our paper lies in both identifying the importance of action representation in continuous control and proposing TACO as a simple yet effective solution to address this problem.
---
> Extra parameter K is required to tune
We acknowledge that this is one limitation of TACO. However, we would like to point out that in TACO, we only use K=1 or K=3 in all of our experiments. Thus, it requires minimal hyperparameter tuning effort to find the best K.
---
> It would be better if the proposed method can be evaluated in more environments besides DMC.
We appreciate your suggestion to evaluate our method in diverse environments. In response, we have selected six challenging tasks within the **Meta-world** domain to test the online learning performance of TACO against the DrQ-v2 baseline. The learning curve, presented in **Figure 17** of the attached PDF file, once again demonstrates TACO's significant performance improvement. It successfully solves complex tasks such as hammer, assembly, disassemble, stick pull, and pick place wall, which DrQ-v2 cannot accomplish within 2 million online interaction steps. These results underscore that the insights of TACO extend beyond the locomotion tasks of the DeepMind Control Suite to complex and intricate robotic manipulation tasks.
---
> How do you ensure the optimal action will be the same?
>
In lines 132-134, we're not claiming that the optimal action will be identical. The result indicates that if two state-action pairs possess the same state and action representations, their optimal Q values coincide. Thus, the optimal value function can be factorized as $Q^*(\phi(s), \psi(a))$. This means for a given state, if two actions share the same representation, they'll have an equal optimal Q value. Thus, optimal actions do not have to be unique.
---
> In Eq. 4, Why do you use a learnable parameter W as a similarity measure, as the features have already been projected to the latent space by network G and H?
This is a design choice that we have made, following the same design choice as CURL and CPC. We could instead use the cosine similarity measure, as done in MoCo, SimCLR, and CLIP. In our empirical evaluation on the Quadruped Run task, we find that using cosine similarity leads to a 3.5% decrease in performance at the 1M mark (541 ± 38 vs. 522 ± 56).
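To make the two design choices concrete, here is a small sketch contrasting a learnable bilinear similarity (CURL/CPC style, as in Eq. 4) with the parameter-free cosine alternative (MoCo/SimCLR/CLIP style); function names are ours, not the paper's.

```python
import numpy as np

def bilinear_sim(z1, z2, W):
    # Learnable bilinear similarity: W is trained jointly with the encoders.
    return z1 @ W @ z2.T

def cosine_sim(z1, z2):
    # Parameter-free alternative: normalize, then take inner products.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    return z1 @ z2.T
```

The bilinear form can learn which embedding dimensions matter for the match, at the cost of extra parameters; cosine similarity is bounded in [-1, 1] and has nothing to learn.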
---
> Is the action encoder only used in the temporal contrastive loss or is it also incorporated in policy learning?
The action encoder is utilized both in the temporal contrastive loss and the critic learning, as illustrated in Figure 2. It is updated based on both the TACO loss as well as the Temporal Difference (TD) loss from the critic functions.
---
> GPU memory cost:
TACO's batch size of 1024 is four times that of DrQ-v2 but only a quarter of the standard batch size in the contrastive learning literature. Using the same CNN architecture as DrQ-v2, TACO's GPU memory requirement for the Quadruped Run task is 4.8 GB, in contrast to DrQ-v2's 2.2 GB. This fits easily on a single RTX 2080 Ti GPU.
---
Rebuttal Comment 1.1:
Title: Additional Questions?
Comment: Thank you again for your constructive feedback. In our earlier response, we clarified our distinction from DRIML and conducted additional experiments on six tasks from the Meta-world domain. If you have further questions, we are more than happy to answer them. | Summary: This work introduces TACO, a novel state-action representation learning technique based on contrastive learning. Empirically, TACO outperforms both model-free and model-based visual RL baselines in both online and offline settings.
Strengths: 1. The work studies joint state-action representation learning, which is less studied than state representation learning.
2. The empirical results for the online setting seem promising and highlights the importance of both state *and* action representation learning.
Weaknesses: 1. The core novelty of this work is the contrastive temporal objective for learning state-action representations and section Fig. 6b shows that it is most responsible for the observed performance improvement. However, it's unclear how important TACO is in the offline RL experiments; TACO makes use of data augmentation (for the CURL loss) while the remaining baselines don't have access to augmented data; thus TACO has an unfair advantage. I would like to see ablations similar to Fig. 6b for the offline experiments.
2. None of the baselines considered learn a latent action representation. While Fig. 3 shows that TACO's action representation is important, it's unclear if it's any better than other methods that learn an action representation via e.g. a state-conditioned variational autoencoder [1].
I am willing to raise my score if the authors address these concerns.
**Minor comments:**
1. Fig. 6 a and b should have the same vertical scale to make the figures more easily comparable.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Is there a specific reason why the authors use CURL as an additional auxiliary tasks (as opposed to e.g. SODA [2])? If CURL can be swapped with another method, it would be worth mentioning this.
2. Could the authors clarify the experiment in lines 215-231? You add 20 dummy dimensions to the action vector when learning the action representation, though I'm not sure what is meant by "1000 noise dimensions", and I am not sure where "the 4000 actions" come from.
3. Line 120: Is there a reason why the authors chose to let the action representation be independent of the state?
4. Line 115-120: The goal stated here is to compress the state-action representation while retaining information needed to solve the task. Is the action representation being compressed? The sensitivity analysis in Appendix D considers increasing the action dimensionality in Quadruped Run by a factor of 2.
[1] Laser: Learning a latent action space for efficient reinforcement learning. Allshire et al. ICRA 2021.
[2] Generalization in reinforcement learning by soft data augmentation. Hansen & Wang. ICRA 2021.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback and review! We are encouraged that you recognize the novelty of our approach in joint state-action representation learning and appreciate the promising empirical results of TACO. Below we first address your two concerns.
---
**Offline RL ablation**:
We conduct additional experiments in which we remove the CURL and reward prediction loss from TACO, referring to this modified version as "TACO w/o R&C." In **Figure 16** of the attached PDF, we present the normalized rewards for both CQL and TD3+BC with TACO, TACO w/o R&C, and without TACO. These results underscore that the removal of reward prediction and CURL losses does not significantly diminish TACO's performance, emphasizing that they are not the primary drivers of TACO's superior performance.
**Comparison with LASER**:
First, LASER and TACO differ significantly in their action representation learning. While TACO uses action representation only for critic learning, LASER alternates between policy/critic learning and action representation learning, potentially causing instability in RL learning, as the agent's latent action space continuously evolves throughout the training process.
LASER does not make their code publicly available, and it builds upon the state-input SAC algorithm without providing comprehensive experimental details. Therefore, we re-implement their learning objectives from scratch based on DrQ-v2. On Walker Run, we evaluated the performance of LASER with different dimensionalities of the learned latent action space.
| | LASER (dim=4) | LASER (dim=6) | LASER (dim=8) | DrQ-v2 | TACO |
| --- | --- | --- | --- | --- | --- |
| Walker Run | 251 ± 79 | 492 ± 51 | 511 ± 41 | 517 ± 43 | 637 ± 11 |
(Note: the original action space of Walker Run is 6 dimensional and TACO's latent action space dimension is 8.)
Our findings suggest that LASER's performance suffers from the iterative updates between RL and action representation learning, limiting its improvement over DrQ-v2, particularly when compressing the action space. However, we must acknowledge that due to the time constraints for the rebuttal phase, our choice of hyperparameters for LASER may not have been optimal, potentially failing to reproduce their best results.
---
Next, here is the response to your specific questions.
>Question 1: Is there a specific reason why the authors use CURL as an additional auxiliary tasks (as opposed to e.g. SODA)?
We chose to use CURL, but we could also apply the SODA loss in place of the CURL loss. We test TACO with the SODA loss on the Quadruped Run and Walker Run tasks. As shown in **Figure 15 (Left)** of the attached PDF, the results suggest that for Quadruped Run, CURL as the auxiliary loss outperforms SODA, but yields comparable performance on Walker Run.
>Question 2: Clarification on Line 215-231 (Figure 5)
The primary objective of this experiment is to assess whether TACO's learned action representation can effectively extract control-relevant action information. To further clarify, here is the Gym-like pseudocode for our modified Cheetah Run environment:
```python
from gym.spaces import Box

class CheetahRunNew:
    def __init__(self, env, action_dim=6, distract_dim=20):
        self.original_env = env
        # Expanded action space: `action_dim` real control dims + `distract_dim` distractor dims.
        self.action_space = Box(-1.0, 1.0, shape=(action_dim + distract_dim,))
        self.action_dim = action_dim

    def step(self, action):
        # Only the first `action_dim` dimensions actually affect the environment.
        return self.original_env.step(action[:self.action_dim])
```
In this modified Cheetah Run environment, while the action space dimensionality has been expanded to 26, only the first 6 dimensions are utilized. We then train TACO in this modified environment, setting the dimensionality of the learned latent action embedding to 6.
To evaluate whether the learned action representation indeed captures the information of the first 6 dimensions, we sample four actions, $a_1, a_2, a_3, a_4 \in R^6$, to act as centroids. For each of the four centroids, we generate 1000 augmented actions by adding standard Gaussian noises to the last 20 dimensions.
Our hypothesis is that actions with the same centroid should possess similar latent representations, given that they are fundamentally the same action. To validate this, we generated a t-SNE action embedding plot (**Figure 5** of our original manuscript), which revealed successful clustering of semantically similar actions.
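The sampling procedure described above can be sketched as follows (variable names are ours, and the exact centroid distribution is an assumption; the resulting 4000 × 26 action matrix is what would feed the t-SNE embedding plot):

```python
import numpy as np

rng = np.random.default_rng(0)
action_dim, distract_dim, n_aug = 6, 20, 1000

# Four centroid actions a_1..a_4 in the 6 real control dimensions.
centroids = rng.uniform(-1.0, 1.0, size=(4, action_dim))

groups = []
for c in centroids:
    # Repeat the 6 control dims; perturb only the 20 distractor dims with Gaussian noise.
    base = np.tile(np.concatenate([c, np.zeros(distract_dim)]), (n_aug, 1))
    base[:, action_dim:] += rng.standard_normal((n_aug, distract_dim))
    groups.append(base)

actions = np.concatenate(groups)  # (4000, 26) actions, 1000 per centroid
```

A good learned action encoder should map all 1000 actions of a group near one another, since they are functionally the same action.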
>Question 3: Is there a reason why the authors chose to let the action representation be independent of the state?
The learned action representation does not strictly need to be state-independent. Indeed, we have explored the possibility of allowing the action representation to be dependent on the latent state. Through empirical testing on the Quadruped Run and Walker Run tasks (**Figure 15 (Right)** of the attached PDF), we find that whether the action representation is state-dependent or independent does not create a major difference in the performance of TACO. Thus, we opted for the simpler approach, allowing the action representation to be state-independent. We acknowledge that our empirical results do not rule out the possibility of allowing state dependent action representations, and we will clarify this in our revised manuscript.
>Question 4: Is the action representation being compressed?
The action representation in our approach is not primarily aimed at compression. Instead, our focus is on shaping this representation to align with the optimal Q function. Drawing an analogy with state representation, we assert that a valuable action representation ought to be able to linearly represent the optimal Q function and should also be predictive of subsequent states. These guiding principles inform our method in TACO, where the action embedding is systematically learned through a thoughtful integration of critic TD loss and temporal contrastive loss updates. We appreciate the reviewer's insight and question, and we will clarify this point in our revised manuscript.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response; the additional experiments and clarifications address my concerns, and I am now comfortable raising my score from Weak Accept to Accept.
The results in this paper are useful to the community, as they underscore the benefits of learning both state and action representations -- a concept that is fairly unexplored in the literature -- and the empirical results are strong in the online setting. Since performance differences in the offline setting are smaller, it would be easier to interpret results if you highlighted cells for which e.g. the difference between CQL/TD3+BC w. TACO and CQL/TD3+BC is statistically significant according to a paired t-test at a 95% confidence level. The significance is obvious for some tasks (e.g. CQL in Quadruped Run, Full-replay) but not so obvious for others.
A few additional comments:
1. To clarify the experiments in lines 215-231 I suggest replacing "Then we select four actions...from a standard normal distribution" in lines 223-227 with what you wrote in your rebuttal: "we sample four actions, $a_1, a_2, a_3, a_4 \in \mathbb R^6$, to act as centroids. For each of the four centroids, we generate 1000 augmented actions by adding standard Gaussian noises to the last 20 dimensions." I now see how that particular phrase was the source of my confusion.
2. I don't think you specify what the shaded regions denote in Fig. 4.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Thank you for your response and additional comments. We are encouraged by your recognition of our work's importance in state-action representation learning for visual continuous control and our algorithm's strong empirical performance. We will integrate all of your suggestions, including highlighting tasks where our method significantly outperforms offline RL baseline algorithms, into our final manuscript. | Rebuttal 1:
Rebuttal: We thank all reviewers for their insightful questions and valuable feedback. We are encouraged that reviewers recognize the importance of our tackled problem in state-action representation learning (gh9U). They also appreciate the flexibility and applicability of our proposed approach, TACO, in both offline and online RL settings (1TK9, Y2ee, od4S), and highlight the outstanding empirical performance of our methods (gh9U, 1TK9, Y2ee, od4S). We have addressed all individual questions of reviewers in separate responses.
Additionally, we have attached a one-page PDF with more experimental results, including (**Figure 15**) two ablation studies - one substituting CURL with SODA, another using state-dependent over state-independent action representation; (**Figure 16**) an additional ablation study on CURL and reward prediction loss in offline RL; (**Figure 17**) online RL experiments for six Meta-world manipulation tasks; and (**Figure 18**) further experiments on representation pretraining using TACO.
Here we briefly outline the response to the two most common questions raised by the reviewers. The detailed response is included in each individual response.
**CURL and reward prediction objectives** (Reviewer gh9U, Y2ee):
In our initial tests on the Quadruped Run and Walker Run tasks, we discovered that CURL and reward prediction enhanced TACO's performance, leading us to incorporate these two losses into TACO's final objectives. However, we would like to emphasize again that the CURL and reward prediction objectives are added only as auxiliary losses to further improve the performance. We conduct additional experiments in both online (Reviewer Y2ee) and offline RL (Reviewer gh9U) settings. Together with **Figure 6(b)** in our original manuscript, they demonstrate that the CURL and reward prediction objectives are not the main driver of the superior performance of TACO.
**Novelty of our approach compared with other representation learning methods such as DRIML** (Reviewer 1TK9, oD4S):
We want to emphasize that other representation learning objectives, except for CURL and ATC, were studied in environments with well-represented, small discrete action spaces, thus overlooking the importance of action representation learning. In contrast, we identify the importance of action representation learning in continuous control, an under-explored topic in previous works. We introduce TACO as a simple yet effective approach to utilize temporal contrastive loss to learn state and action representations. In our comparison with other representation learning objectives in **Table 2** of our original manuscript, we fix the action representation learning part and focus only on the design of the temporal contrastive loss. Still, we see some non-trivial improvements over other representation learning objectives. In summary, our contribution lies in both underscoring action representation in continuous control tasks and introducing a simple yet effective temporal contrastive TACO loss as a solution to state and action representation learning in visual continuous control problems.
Pdf: /pdf/bc366a973f7a67a210d00840c5e627f2d7d31e6b.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Continual Learning for Instruction Following from Realtime Feedback | Accept (spotlight) | Summary: Training of a continually learning, instruction-following agent from feedback provided by users during collaborative interactions. The problem is that humans often give noisy rewards at irregular intervals. The method formulates the learning scenario as a contextual bandit problem and alternates between training and deployment stages. The main issues addressed are the irregularly timed reward and the credit assignment problem. The method addresses these using heuristics and demonstrates its effectiveness in the CerealBar environment through human evaluation.
Strengths: The paper is straightforward to follow. The topic is very relevant. The evaluation and improvement throughout the interaction rounds are significant in many aspects. The experiments conducted are very thorough and complete and have convincing results to highlight the continual learning nature of the agent.
Weaknesses: - The assumption that feedback is matched with an action or close-by actions (heuristics presented in the paper) is somewhat restrictive
- transitions without rewards are discarded and not fully utilized, which is quite a waste of resources
- requires expensive labelling process
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - if I understand correctly, since the method is formalized as a contextual bandit problem maximizing the immediate expected reward, and Line 77 says "Feedback signals are given in realtime, and are not timed to coincide with a specific action", won't this be unfavorable for training, since the corresponding action and reward do not even match up?
- isn't the assumption of giving a binary reward and optimizing the contextual bandit objective equivalent to gradient descent/ascent on the right/wrong actions (with noise, since humans give rewards quite randomly), which is almost equivalent to supervised learning? If this is the case, I feel the continual learning part is just iteratively collecting more supervised learning data, whose performance improvement is quite obvious
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations and broader impacts are complete and included in the appendix
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and questions. We are looking forward to answering any follow up questions during the discussion period.
Assumption of feedback alignment: real time feedback follows patterns of human response, including its delays. So this assumption follows the role of humans in the process and how interactions happen in real time. It’s possible for humans to give feedback to decisions that were made further in the past. Our learning approach does not handle it, and will suffer when such a signal is given. However, this is not something we force our users to avoid, and empirically the approach handles this and other types of noise quite well, showing effective learning. In addition, this correspondence between feedback and action is also seen in prior work (e.g., COACH and TAMER). We agree it does not provide perfect credit assignment (an important direction for future work), but our approach is robust to it. Our heuristics make further assumptions, but our experiments generally show that they are not critical for effective learning.
Discarded training data: there are two conditions. When we don’t use heuristics, the data is essentially discarded. When applying heuristics, nearly all actions/transitions are assigned feedback (and thus used during training), except those that appear after the last feedback provided by a user in a rollout, as we can’t estimate what feedback the user would have given. This was a key motivation behind the heuristics. However, the heuristics didn’t prove critical, as noted above. It remains an open problem for future work to study if it’s possible to infer a learning signal for actions without feedback. We don’t claim to completely solve the problem of learning from such feedback, but to show an effective approach and its significant potential.
Expense of labeling: except the supervised data used for initialization (a very small amount) and the human evaluation, there is no conventional labeling cost involved in the actual continual learning. The cost of our experiments is paying for people to interact with our system, and user feedback is built into the interaction. It’s just like deploying a system, but we don’t have the ability to deploy a real product for people to interact with, so we create this scenario by paying people to act as users. This approach is advantageous not only against conventional data annotation, but also when compared to most RLHF methods, which rely on post-hoc preferences from third-party annotators (and then you do see high labeling costs).
Questions:
Q1, alignment between actions and rewards: To clarify this sentence, we mean that we don’t force users to provide feedback for each individual action the agent takes (e.g., by moving the agent only after the user gives feedback for the current action), but instead allow users to provide feedback at any point in time as they observe the instruction being executed. This creates a more challenging signal than forcing humans to provide feedback after each action (which would give perfect alignment), but that design would make the system very frustrating to use. In practice, alignment between the user-provided feedback and the intended actions is very good, especially after correcting for reaction-time delays (as is common when handling human responses). There’s still noise, and the credit assignment problem is not completely solved. But our approach is robust to it, as demonstrated by our experiments showing effective learning over time.
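The reaction-delay correction described above can be sketched as a toy (an illustration under assumptions, not the paper's code; the timestamps, the delay constant, and the data layout are all hypothetical):

```python
# Illustrative sketch: assign realtime binary feedback to past actions by
# shifting each signal back by an assumed human reaction delay.
REACTION_DELAY = 0.8  # seconds; an assumed average human response lag

def align_feedback(actions, feedbacks, delay=REACTION_DELAY):
    """actions: list of (timestamp, action_id);
    feedbacks: list of (timestamp, +1 or -1).
    Returns {action_index: reward}."""
    rewards = {}
    for fb_time, signal in feedbacks:
        intended = fb_time - delay  # when the user likely reacted
        # credit the most recent action taken at or before that moment
        best = None
        for i, (a_time, _) in enumerate(actions):
            if a_time <= intended:
                best = i
        if best is not None:
            rewards[best] = signal
    return rewards

acts = [(0.0, "move"), (1.0, "turn"), (2.0, "pick")]
fbs = [(1.9, +1), (3.1, -1)]
print(align_feedback(acts, fbs))  # {1: 1, 2: -1}
```

Under this toy, feedback arriving with no action preceding the shifted timestamp is simply dropped, mirroring the rebuttal's point that actions after a rollout's last feedback go unlabeled.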
Q2, equivalence to supervised learning: the objective does look a bit like supervised learning (except for the IPS coefficient). However, it only appears like a supervised learning objective: the actual supervision isn’t coming from a gold-standard demonstration (i.e., a human demonstration of instruction execution). Instead, the data we train on includes trajectories sampled from our policy (conditioned on user instructions) annotated with feedback provided by users. Additionally, unlike supervised scenarios (and single-agent RL scenarios), this is a non-stationary environment, because humans constantly change their behavior; the data is not drawn from a single fixed distribution, as assumed in supervised learning. So in practice, while the objective appears similar to a supervised learning objective, the learning problem is far from a supervised learning problem. The objective is also derived by maximizing the value function (i.e., as in REINFORCE), not by maximizing the data likelihood. But the mathematical similarity to supervised learning is actually a benefit of our method, as supervised learning is more stable and predictable than many more complex RL approaches.
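For illustration, the kind of clipped-IPS REINFORCE-style update the rebuttal describes might look like this (a hedged sketch, not the paper's implementation; the softmax-policy parameterization, the clip value, and the function name are assumptions):

```python
def bandit_grad(probs, action, reward, pi_old, clip=5.0):
    """Ascent direction of a clipped-IPS REINFORCE surrogate w.r.t. the
    logits of a softmax policy.

    probs:  current policy's action probabilities
    action: index of the logged action
    reward: binary user feedback, +1 or -1
    pi_old: probability the behavior policy gave the logged action
    """
    # clipped importance weight: de-biases off-policy data and
    # bounds the gradient magnitude
    ips = min(probs[action] / pi_old, clip)
    # d log pi(action) / d logit_j = 1{j == action} - probs[j]
    return [ips * reward * ((1.0 if j == action else 0.0) - p)
            for j, p in enumerate(probs)]

# positive feedback pushes probability mass toward the taken action
g = bandit_grad([0.2, 0.5, 0.3], action=1, reward=+1, pi_old=0.5)
print(g)  # [-0.2, 0.5, -0.3]
```

Note the single-step structure: each (context, action, reward) tuple is treated independently, which is what makes the update look superficially like weighted supervised learning despite the off-policy, non-stationary data source.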
Another small note: while user-provided feedback is somewhat noisy, it is not so random that it is impossible to learn from, and analysis of the human-provided feedback shows that it is very high quality. This is also demonstrated by our results, showing very effective learning.
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarifications. Based on a better understanding of how alignment is achieved through continuous interactions between human and agent, I updated my score to 5. | Summary: The authors propose a method for online, continual training of an instruction-following agent based on realtime user feedback gathered in the collaborative game CerealBAR. The agent needs to follow the human's instructions and complete the task. The paper adopts the contextual bandit learning approach, with immediate rewards computed from binary user feedback. Through the evaluation of thousands of human-agent interactions, the authors demonstrate a significant improvement in instruction execution accuracy over time. They also conduct experiments with multiple training scenarios and verify the robustness of their approach.
Strengths: * The authors propose and deploy the method in a collaborative human game. The collected agent-human interaction data will be helpful for future agent design.
* It is a pilot work demonstrating the strength of continual learning from human feedback for instruction-following agents. The authors show that after training the agent on a human-human dataset, the policy can be greatly improved through online off-policy learning from binary human feedback.
* The authors conduct comprehensive online experiments with variants of learning choices and human post-interaction evaluation. The variants of learning choices show the robustness of the framework, and the human post-interaction evaluation provides empirical analysis of how the agent's behavior improves after learning from humans.
Weaknesses: * The framework design is not very clear. The authors do not say much about the design of the policy model, which transforms human instructions and observations into actions (the Model part in Section 4). The paper mainly discusses how to define user feedback and how to train with continual learning given that feedback.
* The framework uses a binary signal as user feedback, which is limited in representing human satisfaction. It may be difficult to generalize to other tasks with complex instructions and observations.
* The framework is relatively simple, relying on the REINFORCE algorithm. It would be more convincing if the authors conducted additional experiments with other standard off-policy learning algorithms to demonstrate the robustness of the framework.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Although the model achieves a 15.4% absolute improvement through continual learning over time, it is hard to gain insight into how good the instruction-following agent is. To show the benefit of off-policy learning in this framework, have you tried baseline models for comparison, such as offline RL, given the difficulty of additional online data collection?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The experiments are difficult to reproduce. Training the policy from human realtime feedback can lead to significantly higher cost compared with offline RL or a user simulator, which is widely used in dialogue agent design.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and questions. We are looking forward to answering any follow up questions during the discussion period.
Model design: We provide the design of the policy model in the Appendix. In short, it is a modification of the architecture used by previous work (Suhr et al. 2019). As it is not a main contribution of our work, we have put it in the appendix. Our aim was to take a model that has been shown to work to some extent on this domain, and focus on the learning problem.
Binary feedback: Existing work on learning from human feedback for embodied agents uses this kind of binary, realtime feedback provided by users (e.g., in TAMER and COACH). Binary feedback is simple, and conceptually generalizes to other tasks and feedback scenarios. Effectiveness is an empirical question, of course. However, for example, LLM RLHF uses binary preference signals (albeit, unlike our work, given by third-party annotators, and neither embodied nor realtime), which empirically work well for complex reasoning tasks. The simplicity of binary feedback is what allows interleaving it with the interaction. It also doesn’t require complex machinery to interpret it (unlike natural language feedback). Of course, it carries less information than other forms of feedback (e.g., natural language), even if it requires less of the user. Using such information-rich (but more costly to obtain) forms of feedback is an orthogonal and open research question. They come with pluses (more information) and minuses (costly to get, harder to interpret).
Re experimenting with off-policy algorithms: please see the general response and “on using offline RL” below.
Insights on policy quality: We provide comprehensive analysis of policy errors as annotated by workers in Figure 3 (right), which we discuss in the main paper. We see that for nearly all of the error categories, the policy is improving over time; the only error category which remains roughly stable is the efficiency of the policy’s trajectory (roughly 6% of trajectories are marked as inefficient).
On using offline RL: we agree there are alternative offline RL methods that we could use to swap our REINFORCE-style optimization loop. Integrating them in this way won’t reduce the need to collect feedback data over rounds. These methods are also more complex, while we opted for simplicity in this first study of the problem (there are many open directions for future work). Swapping the whole process with offline RL is less clearly effective. We could swap the supervised initialization using offline RL with the seed data, and that might increase initial performance, or allow us to use less initial data. This would just give a better launching pad for our learning from feedback. Of course, this is an empirical question, which is orthogonal to the questions we study.
On using a user simulator: While using a simulator could potentially improve model performance as it would generate new data to train on, it fails to capture a fundamental aspect of agents deployed in real interaction with people: the dynamics that arise as real users adapt to the agent through interaction. Manually designing user simulators has been studied in the past and is orthogonal to our contributions; such a manually designed simulator would also be placing assumptions on the kinds of language that people use in interactions with our agents. Building a strong simulator is also a lot of engineering work. Our approach takes advantage of the interaction the system would have with users anyway, and because it derives data from these interactions, it’s always aligned with the data distribution observed in its actual interactions, something that is very difficult to achieve with simulation.
Reproducibility: Experiments with real human users are naturally more difficult to reproduce than experiments which rely on fixed / static datasets or simulators; we anticipate that all of the difficulty in reproducibility will arise from recruiting and maintaining a user base. However, we publicly provide code, implementation details, data, etc. Furthermore, conducting these experiments at the scale we did is a key contribution of our paper. Almost no open research that is reported in detail has been done with such deployments over long periods of time, as we do. The complexities are a natural by-product of the core research problem we study, and there is no way around them. Simulators, for example, avoid costs and increase reproducibility, but poorly reflect interactions with real users. For example, simulators are static, whereas real users are dynamic and constantly change their behavior. On a fundamental level, there’s a big difference between the stationary problem simulators provide, and the non stationary problem of a deployed system, as we study.
---
Rebuttal Comment 1.1:
Comment: Thank you for elaborating! The overall design is reasonable and the online continual learning study is valuable for future research. I updated my score to 7. | Summary: This work presents a simple yet effective framework for continual learning in an instruction-following task utilizing human feedback. Using CEREALBAR as a testbed, the work demonstrates the framework in abundant detail and shows its effectiveness through experimental results. It also conducts various analyses of human feedback both during and after the interactions. Finally, various design choices are compared.
Strengths: The method is new and well supported by the framework description, the experiment results, and the analysis in the paper. The writing is well organized and clearly written in general.
This work addresses the embodied AI problem from a relatively new perspective. It demonstrates effectiveness on task performance as well as benefits in reducing annotation effort. The work also suggests promising future directions centered on human-agent dynamics, such as the form of human feedback and the communication style between human and agent.
Weaknesses: I don't find any noticeable weakness of this paper.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Figure 3, left: the x-axis range indicates the number of Turk workers, right? Aren't there 108 in total?
2. Maybe I missed something, but how did you annotate the demonstration data? Also, since reward propagation is one of the main design choices, is the gold data's feedback distribution similar to that annotated by MTurkers?
3. Do you have any clue how user adaptation contributes to specific error decreases? (Figure 3, right)
4. line 271: not sure what "we deploy a single agent for all but FEWERDEMO in the first round" means. Can you explain more?
5. (Figure 4, right) There seems to be a golden ratio between positive and negative feedback emerging. What do you think of that?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations are adequately addressed in supplementary material.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and questions. We are looking forward to answering any follow up questions during the discussion period.
Q1, Figure 3 x axis: The x-axis represents a proportion of interactions (so it sums to 100), rather than the exact number of interactions. Each different colored bar represents the proportion of interactions in each round that were given a particular rating. We are using proportions here because the number of interactions per round differs slightly. We will clarify this in the paper.
Q2, demonstration data: To “annotate” the supervised demonstration data with feedback, we assign every action positive feedback, because we assume human demonstrations are gold-standard (i.e., everything a human follower did in the demonstration is correct). We don’t apply reward propagation heuristics to this data, because we assign positive feedback to each action, so in terms of the feedback distribution, demonstration data includes 100% positive feedback while the data collected during human-agent interaction includes examples with negative feedback.
Q3, adaptation and errors: This is a great question. We didn’t find that any particular errors correlated with particular user adaptations. We find that in the direct comparison between the initial and final models ($\theta_1$ versus $\theta_{11}$), where user adaptation is reflected in interactions (when both are deployed concurrently in the final round), the initial model still has significantly more errors in all categories except for Errors 5 (which is already very rare) and 6 (which stays static over time anyway). This comparison further shows the error trends reflect genuine improvements in the model, and are only partially influenced by user adaptation (although we indeed see some adaptation, and our experiment measured for it).
Q4, deployment: Thanks for bringing up this confusing wording. What we mean is that for {RewardProp, SimpleReward, NoNegative, SupOnly}, the agent in the first round is exactly the same, as all of these systems are pre-trained on the exact same set of training data, so there’s no difference between them in the first round. But, for {FewerDemo}, the model is pre-trained on a smaller subset of the data. So in practice in the first round, we only deploy two agents: one for {RewardProp, SimpleReward, NoNegative, SupOnly} and one for {FewerDemo}. We will clarify this in the paper. This allowed us to save some interactions (and money) without influencing the experiment.
Q5, feedback ratio: this is an interesting question, and we don’t have a conclusive answer to it. It does seem like the curves converge to a similar ratio across the different systems. We suspect we would need to run this experiment for much longer to confidently say that a consistent ratio emerges. This would likely require re-running with a disjoint pool of workers as well, to rule out the impact of the specific pool of workers. So, we don’t think we are ready to draw strong conclusions, but we do see what the reviewer is pointing to. This raises interesting directions for followup work, which our work enables, such as the impact of learning and interface design decisions on the long-term equilibrium of human user feedback behavior.
---
Rebuttal Comment 1.1:
Comment: Thanks for addressing my questions! It is a very interesting work and the effort you put in the work is respectable. I will update my score to 8 | Summary: This strong work presents a systems contribution in a fully-fledged system for continual learning from language feedback, in the context of situated human-to-robot instruction following tasks. Using the CerealBar environment (roughly inspired the card game SET, with an embodied flair), this work starts by learning an agent's instruction following policy from a set of offline demonstration data. This agent is then deployed against a crowd of *real human users* who work together with the agent to complete more and more task instances in the CerealBar environment, with humans providing (sporadic) binary feedback in real-time.
After each learning round (a fixed number of episodes), the agent policy is updated from the real-time reward data, where the authors define a heuristic to assign the real-time feedback to individual (state, action) pairs taken by the robot; this heuristic is derived from empirical "delay in human response" data, as well as from the traditional RL literature for assigning credit to actions that aren't explicitly provided feedback. The agent update is a contextual bandit style update, using a variation of the simple REINFORCE policy gradient update (training effectively a 1-step RL policy).
Across 11 rounds of updates from real-world human data, the proposed system obtains impressive results, showing a constantly growing trend in both individual instruction execution accuracy, as well as total CerealBar score (with the steepest increase in performance happening across the earlier rounds). The work also has a series of systematic ablations (comparing the binary feedback based approach with "full supervision" - showing that the proposed bandit approach can almost match the same performance), as well as qualitative Likert scale results from the real crowd users actually interacting with this system.
Strengths: This is a well-written and extremely well-executed systems contribution - the task of learning to improve situated agents from language and binary feedback is incredibly timely, and every part of the proposed system is implemented carefully and studied thoroughly. The evaluation is strong (both qualitative and quantitative), and definitely shows that not only is the agent able to effectively improve with feedback, but it's able to improve *efficiently* and actually grow its capabilities over time. I am a huge fan of this work.
Separately, I think the dedication of this paper to open-sourcing all parts of the pipeline, including the multi-round crowdsourced interaction data is admirable, and will be an incredible resource to the community.
Weaknesses: In a paper such as this, I can understand it's very hard (costly + time-consuming + introduces a lot of confounds) to evaluate multiple different learning approaches at the full-scale of the real-world evaluation (though I really appreciate the experiments in section 5.2 that does this for a limited number of rounds). That being said, I would love to look at other agent learning paradigms, and justify the use of the 1-step contextual bandit style reward, vs. a multi-step RL approach that automatically learns how to perform credit assignment given the sporadic binary feedback. Especially using some of the more recent tools in the RL toolbox (e.g., learning a value function/advantage function, PPO-style clipped updates, or even off-policy/offline RL methods).
A general worry I have is that much of this paper hinges off the design of the reward/utility function, which starts from the principled binary feedback provided by users, but is then further processed through a series of heuristics that may or may not really capture what's going on (for example, how the current heuristic labels transitions when there isn't an explicit binary reward tied to that timestep). I would love for the authors to address these choices in a bit more detail.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: What motivated the choice of using clipped IPS coefficient for the policy gradient update vs. a PPO-style update (with a learned one-step value function estimator for debiasing); given the choice of using REINFORCE policy gradient anyway for the 1-step update, this would've been an easy addition, and possibly more stable?
I'd be curious to see a more detailed breakdown of the types of language instructions (and *when* in an episode a user usually provides feedback) across the different rounds; are there more complex instructions/abstractions learned over time? Do users intervene differently in different rounds? What are the failure modes?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The paper provides a thorough limitations section (in the appendix) that is well thought out and clearly states weaknesses in the current approach. It would be nice to move some of the punchlines to the main body in the final version of the paper though!
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and questions. We are looking forward to answering any follow up questions during the discussion period.
Other methods: We agree that experimenting with different learning methods is a good direction for future work; in this case, we opted for simplicity especially in light of only having a small amount of data to learn from (methods which require learning multiple models, such as a value function, may have stronger requirements in the amount of data used for training), and working in a complex dynamic process where agent observations are coming from a policy-dependent distribution (i.e., because users adapt their language and behavior to their interactions with the agent). The paper also explores uncharted territory, so we figured simplicity is the best way to approach it, even if higher performance is likely with more complex methods. As the reviewer notes, cost was also a factor, as more complex methods often require more tuning. We hope our work paves the way for future work in this area, including along the directions the reviewer mentions.
Heuristics: We manually developed heuristics initially after early experiments showed they were useful. In general, the rate of “errors” assigned by the heuristics was very low (i.e., nearly always reflected the instruction’s intent), so heuristics serve to densify reward in our contextual bandit setup. Surprisingly to us, in our comparison experiments (Section 5.2) we found they were actually not as influential as initial pilot experiments showed; however, there is some evidence they sped up learning early on. So, an important takeaway is that the effectiveness of the approach does not necessarily rely on very involved heuristic reward design (although future work might reveal better approaches that do make significant difference).
Policy gradient vs. PPO-style learning: see above under “other methods”. This was motivated by a preference for simplicity, especially given the costs of studies and the goal of rapid updates with a very limited amount of data. Early on, we conducted pilot studies with more complex methods (e.g., COACH by MacGlashan et al. 2017, which is designed to learn from feedback), but they didn’t work well. That said, there were too many confounding factors to draw strong conclusions about the algorithm, except that getting them to (potentially) work is more complex. An interesting direction for future work now that we showed the whole process works is to apply more localized changes to the optimization algorithm (e.g., switching to PPO as you suggest) and understand the implications. The rationale for using IPS is (a) to avoid exploding gradients and (b) to de-bias (end of Section 5).
Analysis: We didn’t find evidence that users built abstractions or relied on more complex instructions over time; in fact, there is some evidence that they simplified their instructions (e.g., decreasing the rate of multi-card instructions), and reducing the number of references to objects in the environment. We don’t think the CerealBar stimuli is designed to elicit abstractions. In terms of interventions, we found that the rate of user intervention via reboots goes down significantly over time (as the agent gets better). Also, while this is not reported in the paper, we found that users shifted more labor to the agent over time: the average number of steps the follower took per set increased from 14.8 to 15.3 over the long-term experiment, while the number of steps for the leader decreased from 10.0 to 8.9 steps. We can add this analysis to the main paper. If by failure modes you are referring to common errors in instruction execution, Figure 3 (right) reports such categories and their trends via manual analysis, with discussion in Section 5.1.
Thank you for the suggestion to move limitation highlights to the main text! We will certainly do this if the paper is accepted.
---
Rebuttal Comment 1.1:
Title: Rebuttal Response
Comment: Thanks for responding to my review! I think this is a very strong paper, and highly encourage the other reviewers to engage with the authors and consider raising their scores as well! | Rebuttal 1:
Rebuttal: We thank the reviewers for their comments and questions! We look forward to continuing the discussion here during the discussion period. Below are responses to general comments.
Experimenting with different learning paradigms: While we use an objective based on the popular policy gradient REINFORCE objective, we use it in a new learning process that is a new contribution of this work (i.e., learning through live interactions with human users who provide feedback in real time), and demonstrate its efficacy. There are many avenues for future work, including trying more complex RL algorithms such as off-policy learning algorithms or PPO. We opted for simplicity here, focusing on sample efficiency, because of the complexity and costs of studies with human users. And, mostly because this kind of problem is relatively understudied, so we decided to start simple. This is a first step, and we hope our work paves the way for more work in this exciting area, which has so many interesting open problems!
Reproducibility: Conducting these learning experiments at scale (including addressing technical challenges involved in successful deployment and training, but also gaining insights from their deployment, e.g., showing the robustness of our approach to system variations, and analyzing user adaptation over time) and their results are key contributions of our paper. While learning from interactions with human users does pose challenges for reproducibility (i.e., from recruiting and managing a user base), these challenges are unavoidable when studying such complex interaction scenarios. For example, relying on manually designed user simulators will place assumptions on how interactions proceed, require significant engineering effort, and will not reflect the dynamics that arise in real human-agent interaction over time. Indeed, we put in significant engineering effort. We are releasing everything, including our results, data, and platform (its original version was released as CerealBar in the past, but we modified it to capture feedback and conduct our studies), all under the permissive MIT license. This is another contribution of our paper: a platform that enables conducting such experiments, and will save much of the engineering effort we put in for future research. Our code and data are also attached to the supplementary material.
Model: The model design is not a main contribution of our work, so we put details in the Appendix. Our model is a modification of the neural architecture used by previous work (Suhr et al. 2019); we chose to adapt an existing model that has been shown to work on this domain, and focus our main contributions on the learning problem itself. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors work on the CEREALBAR setting, where two agents (one human and one computer) cooperate using natural language to achieve a shared goal. Specifically, the authors propose a new setting where the human agent can provide binary feedback to the computer agent. In their new setting, the authors follow the contextual bandit setting and use a REINFORCE-like algorithm to train the agent. Overall, experiments show that the agent achieved an improvement by learning from human signals.
Strengths: 1. The authors explore a setting that involves real-time human feedback, which may be under-explored.
2. The experiments involve significant human labour and may be beneficial to the community if the authors share their data.
Weaknesses: 1. Limited contribution. The authors simply applied existing algorithms to their setting.
2. Experiments show little insight. Their experiments demonstrate that the agent improves by receiving more human feedback, which is expected.
3. (Minor) Lack of concrete numbers. Results are mostly shown in figures, and it's difficult to see the real performance numbers.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Why do you use a contextual bandit rather than the full RL setting? This is a bit unintuitive since you have a sequential decision-making process.
2. In Figure 3, it seems the performance is the highest at around round 5. Is there a reason for this?
3. Do you plan to release the collected human feedback data? I checked the data folder in the supplementary file but didn't see them.
4. What models are you using? The paper mentions neural networks, but could you please give more details?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Limitations are discussed in the Appendix.
For example, the authors openly disclose that they did not use a more modern architecture for their agent.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and questions. We are looking forward to answering any follow up questions during the discussion period.
Contributions: As the reviewer suggests, indeed, learning from real-time human feedback in embodied interactions is certainly an underexplored area. There is a small amount of existing work in robotics that used real-time feedback to train non-language policies, from both explicit (e.g., TAMER – Knox et al. 2009, COACH – MacGlashan et al. 2017) and implicit (e.g., EMPATHIC – Cui et al. 2020) human feedback. But all this is not focused on natural language. Reinforcement learning from human feedback (RLHF, as in Ziegler et al. 2019 and more recently) is focused on preference-based feedback (i.e., comparing two possible outputs) provided by annotators outside of the interaction context; this work is also not embodied. While our work builds on existing techniques, our learning process as a whole, including experimenting with different ways of mapping feedback to rewards, is a new contribution. The objective we use is a variation of the well known policy gradient REINFORCE objective (although we use it in a bandit setting), but it's part of a complete learning process that is our contribution.
Data sharing: We will share all our data publicly under the MIT license for maximum usability (briefly noted in the paper). The data is included in our supplementary material, in the `game_recordings/` directory; the `data/` directory includes classes and utilities for managing the data. We apologize for the confusion.
Numbers in figures: We specify key numbers in the text. We don’t discuss concrete numbers from Figure 3 (left) or Figure 4 (right) and only highlight general trends in the text due to space limitations. The figures try to balance readability of trends with accuracy. We will add concrete discussion of these two subfigures into the main text, with the exact key numbers. We will also release the complete raw data underlying each of the plots.
Questions:
Q1, contextual bandit (CB) objective: Generally, there's a significant credit assignment challenge in a sequential decision problem like this one, and naively computing discounted returns can lead to many wrong reward assignments. CB avoids this, even if at the expressivity cost of restricting the reward to a single step. Studying better solutions to the credit assignment problem is an important problem for future work. Note that most recent RLHF work also uses a contextual bandit reward, albeit even more restricted because reward is assigned to the complete output (even though text generation can be cast as a sequential decision process). So we are not the only ones opting for this simplification. Also, theoretically, CB has sample complexity advantages with much tighter sample complexity bounds when comparing upper bounds for contextual bandits (Langford and Zhang, 2007), even with an adversarial sequence of contexts (Auer et al., 2002), to lower bounds (Krishnamurthy et al., 2016) or upper bounds (Kearns et al., 1999) for total reward maximization. This is critical in our scenario where we learn rapidly from few examples.
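The bandit-style variant of REINFORCE the rebuttal describes — reward assigned only to the single sampled action, here standing in for mapped binary human feedback — can be sketched as follows. The linear softmax policy, feature shapes, learning rate, and `feedback_fn` are illustrative assumptions for exposition, not the paper's actual model or reward mapping.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def cb_reinforce_step(theta, context, n_actions, feedback_fn, lr=0.5):
    """One contextual-bandit REINFORCE update for a linear softmax policy.

    Reward is assigned only to the single sampled action (the bandit
    restriction, avoiding multi-step credit assignment). `feedback_fn`
    is a hypothetical stand-in mapping binary feedback to {-1, +1}.
    """
    logits = theta @ context                # (n_actions,)
    probs = softmax(logits)
    action = rng.choice(n_actions, p=probs)
    reward = feedback_fn(context, action)
    indicator = np.zeros(n_actions)
    indicator[action] = 1.0
    # gradient of log pi(action | context) for a linear softmax policy
    grad = np.outer(indicator - probs, context)
    return theta + lr * reward * grad, action, reward
```

Under this sketch, repeated updates raise the probability of actions that receive positive feedback in a given context, with no discounted-return computation over the rest of the episode.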
Q2, performance in Figure 3: Figure 3 shows Likert scores for post-interaction user questions and error categories, so we are unsure where this conclusion is being drawn from – could you please clarify where in Figure 3 there’s evidence that performance is highest around Round 5?
Q3, feedback data release: Yes, please see above. The data is included in the `game_recordings/` directory of the supplementary. It will be released under MIT license.
Q4, models: We use a model based on prior work (Suhr et al. 2019), where the inputs (instruction and observation) are embedded and a modified LingUNet (Blukis et al. 2018) is used to predict an action. This is specified in Section 4, and details of the model architecture are available in the Supplementary Material.
---
Rebuttal Comment 1.1:
Title: Reply to the author
Comment: Thank you for the response. I really appreciate it!
As for contribution, I think the dataset can be beneficial for the community. However, I feel more analysis can be really helpful if the entire learning process is to be claimed as a major contribution. Currently, the experiments show that more human data improves performance, but I think that is not surprising and doesn't fully justify the approach. I feel more baseline approaches can help improve the paper.
My question was about the left-hand side of Figure 3, where "strongly agree" usually peaks at around round 5.
---
Reply to Comment 1.1.1:
Comment: We conduct analysis via deployment experiments comparing variants of our approach (finding it robust to several modifications, that using negative feedback helps, and that we can roughly match performance of a learning approach that requires significantly more expensive training data), analysis of how user adaptation occurs and the extent of its influence on perceived model performance, as well as analysis of model errors and how they change over time through continual learning. We are happy to add additional analyses that the reviewers consider as important to demonstrate the effectiveness of our approach, and provide further insight into our experiments.
With respect to Figure 3, the broader trend of agreement with the statements (agree + strongly agree) is increasing consistently over the rounds (likewise, rates of disagreement are consistently decreasing over time). It’s critical to consider the full range of ratings for a complete view of the trends.
An Adaptive Algorithm for Learning with Unknown Distribution Drift | Accept (poster) | Summary: The paper proposes an algorithm for environments with changing distributions without assuming a priori knowledge about the change in distributions. The proposed algorithm provides error bounds that decrease with the number of time steps.
Strengths: The idea of considering independent distributions with distribution shift is unexplored.
The algorithm itself is a solid technical contribution, and the theoretical results are strong
Weaknesses: Numerical experiments are not included.
Assumptions need further justification.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How strong is the assumption on the distributions? What scenarios satisfy such assumptions, are there any real examples?
The reviewer believes that the problem formulation needs further explanation (e.g., what is to be learned at each instant of time).
Numerical results could help to see the superiority of the presented method.
The references are a bit old. There are more recent related works:
“Minimax Classification under Concept Drift with Multidimensional Adaptation and Performance Guarantees.”
“Adaptive online learning in dynamic environment.”
“Random feature based online multi-kernel learning in environments with unknown dynamics.”
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Numerical experiments are not included.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper and for your feedback.
---
**Question**: The idea of considering independent distributions with distribution shift is unexplored.
**Answer**: We study the classical setting where we have a sequence of drifting distributions, but the samples from those distributions are independent.
As we discuss in the introduction, this idea is not unexplored, see e.g. [B,C] (additionally, references in lines 20-26). Our work improves upon the results of this line of research, solving an open question posed in [A] in this setting.
----
**Question**: Assumptions need further justification. How strong is the assumption on the distributions? What scenarios satisfy such assumptions, are there any real examples?
**Answer**: This classical setting has been justified and studied in a long line of research, see lines 20-26 in the paper and references therein. The work [B] provides a good discussion of this setting. We are addressing an open theoretical question within this line of research [A], improving upon previous work.
*In detail*:
In Section 5, we discuss the assumptions introduced in the paper for the major problem of binary classification.
For Assumption 2, we provide an additional discussion in lines 312-325 in the Conclusion, and also in Appendix B.
As we describe in the paper, Assumption 1 is related to learnability, and we discuss sufficient conditions in lines 78-83 for it to hold. This is a very well-studied problem in the learning community, and indeed, these constants depend on the family $\mathcal{F}$, and a general discussion on how to obtain them is out of the scope of this paper.
----
**Question** The reviewer believes that the problem formulation needs further explanation (e.g., what is to be learned at each instant of time).
**Answer** The goal and the problem formulation are specified in Lines 60-62, and further discussed in the following lines. We want to estimate the expectation of all the functions in a family $\mathcal{F}$ with respect to the current distribution $P_T$. This formulation of learning and notation is common in the statistical learning/empirical process community (e.g., [D] ).
---
**Question**: Numerical results could help to see the superiority of the presented method.
**Answer**: The nature of our work is theoretical, and there is no other equivalent method to compare to. We improve upon a long line of research by solving an open question [A]: we show that we can obtain the same results as previous work without knowing the drift a priori. Our results are also tight in a minimax sense (e.g., Theorem 7 for binary classification).
----
**Question**
The references are a bit old.
There are more recent related works:
1. “Minimax Classification under Concept Drift with Multidimensional Adaptation and Performance Guarantees.”
2. “Adaptive online learning in dynamic environment.”
3. “Random feature based online multi-kernel learning in environments with unknown dynamics.”
**Answer**: Thank you for the additional references that we could add to the paper. We believe that we discuss the most relevant line of research in the introduction and related work. Our work solves an open question that was posed in a **recent** paper [A].
The setting, the assumptions, and the main results of the mentioned papers are significantly different from our work. E.g:
1. The concept drift follows a specific dynamic described in that reference. Our paper addresses a more general setting.
2,3. Among the clearest differences, the measure of error is regret rather than the learning error for the distribution at the current time.
----
[A]: Steve Hanneke and Liu Yang. Statistical learning under nonstationary mixing processes. AISTATS, 2019.
[B]: Mohri, Mehryar, and Andres Muñoz Medina. "New analysis and algorithm for learning with drifting distributions." ALT, 2012.
[C]: Philip M Long. The complexity of learning according to two models of a drifting environment. COLT, 1998.
[D]: Sen, Bodhisattva. "A gentle introduction to empirical process theory and applications." Lecture Notes, Columbia University 11 (2018): 28-29.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed rebuttal. You have addressed all my questions/concerns
After reading the rebuttal and other reviews, my recommendation remains the same. | Summary: This paper considers the theoretical problem of determining the sliding window size for empirical risk minimization in the presence of unknown distribution changes. The proposed method aims to enable learning a classifier comparable to approaches that have prior knowledge of the magnitude of the distribution change.
Strengths: + The paper's motivation is commendable, as it aims to tackle the challenge of non-stationary sequential learning, specifically with unknown distribution shifts. This is an important problem, and the proposed technique could potentially contribute to its advancement.
+ Although not evaluated extensively, the results appear reasonable. While I haven't reviewed the proofs in detail, they seem sound.
Weaknesses:
- The paper relies on computability assumptions for key variables, such as the distribution discrepancy (e.g., $\|P^r_T - P^r_T\|_{\mathcal{F}}$) and the constants ($C_{\mathcal{F},1}$ and $C_{\mathcal{F},2}$). However, the paper lacks a discussion on when these assumptions hold, how solvable they are, or how accurately they can be estimated or upper bounded. In comparison to previous methods that estimate distribution shifts, it is unclear whether estimating these parameters would lead to a more practical algorithm. This aspect requires further analysis and discussion.
- The authors claim that their proposed algorithm aims to provide a practical solution to handling unknown sequential distribution shifts without prior knowledge. However, the proposed algorithm introduces new parameters that are challenging to estimate or even tightly upper bound, such as the distribution discrepancy (assumption 2). While the new analysis techniques presented may contribute to addressing the problem, the overall contribution seems limited, as the proposed method essentially shifts the difficulty from estimating the distribution shift magnitude to estimating other parameters.
- The paper lacks a thorough discussion on when Assumption 2 holds, especially in practical scenarios. This assumption is crucial, as the proposed method relies on estimating the distribution discrepancy and other constants that involve the hypothesis class $\mathcal{F}$. Without addressing the practicality of these estimations, the proposed method's feasibility remains questionable. It would be beneficial to provide a detailed comparison of the estimation difficulty for the parameters required by the proposed method compared to previous works.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: It is important to explicitly discuss the computability assumptions made in the paper and discuss when and how they hold. This discussion should include considerations of solvability, accurate estimation, or upper bounding of variables such as the distribution discrepancy and the constants. So it is suggested to provide an in-depth analysis of Assumption 2 and other computability assumptions on the constants mentioned above and its applicability in practical scenarios.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are discussed in this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your work in reviewing our manuscript, and for providing thorough feedback.
---
**Question**: The paper relies on computability assumptions for key variables, such as the distribution discrepancy (e.g., $\|P^r_T - P^r_T\|_{\mathcal{F}}$) and the constants ($C_{\mathcal{F},1}$ and $C_{\mathcal{F},2}$). However, the paper lacks a discussion on when these assumptions hold, how solvable they are, or how accurately they can be estimated or upper-bounded. In comparison to previous methods that estimate distribution shifts, it is unclear whether estimating these parameters would lead to a more practical algorithm. This aspect requires further analysis and discussion.
**Question**: The paper lacks a thorough discussion on when Assumption 2 holds, especially in practical scenarios. This assumption is crucial, as the proposed method relies on estimating the distribution discrepancy and other constants that evolve the hypothesis class
$\mathcal{F}$. Without addressing the practicality of these estimations, the proposed method's feasibility remains questionable [...].
**Answer**: Section 5 gives a thorough discussion of both Assumptions 1 and Assumptions 2 for the important case of binary classification.
For Assumption 2, we provide an additional discussion in lines 312-325 in the Conclusion, and also in appendix B.
As we describe in the paper, Assumption 1 is related to learnability, and we discuss sufficient conditions in lines 78-83 for it to hold. This is a very well-studied problem in the statistical learning community, and indeed, these constants depend on the family F. A general discussion on how to obtain them is out of the scope of this paper.
----
**Question**: The authors claim that their proposed algorithm aims to provide a practical solution to handling unknown sequential distribution shifts without prior knowledge. However, the proposed algorithm introduces new parameters that are challenging to estimate or even tightly upper bound, such as the distribution discrepancy (assumption 2). While the new analysis techniques presented may contribute to addressing the problem, the overall contribution seems limited, as the proposed method essentially shifts the difficulty from estimating the distribution shift magnitude to estimating other parameters.
**Answer**: We respectfully disagree with this characterization, because previous work required prior knowledge, whereas in our work we obtain similar results while only relying on quantities that can be estimated from the data. Moreover, we completely characterize those assumptions for the major problem of binary classification, improving upon a long sequence of work in that area (Section 5). In the general case, the assumptions depend on $\mathcal{F}$.
We also remark that in this setting it is not possible to estimate the magnitude of the distribution drift since there is only one sample from each distribution.
*More details*:
The constants $C_{\mathcal{F},1}$ and $C_{\mathcal{F},2}$ are related to the complexity of learning the family $\mathcal{F}$. This is a well-studied problem within the statistical learning community, and these constants depend on the family $\mathcal{F}$.
The hardness of evaluating the discrepancy in the general case is not a limitation unique to our work, and it is a known limitation for work on transfer learning/domain adaptation, as we remark in the Conclusion. In fact, even with access to more samples from each distribution, previous work has the challenge of estimating the discrepancy (e.g., [C,D]). In our case, we show a major setting (binary setting) in which it is efficiently computable even with limited data. There are other cases for which the computation of the discrepancy is known to be possible [B] (e.g., regression with squared loss).
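To illustrate why the discrepancy $\sup_{f \in \mathcal{F}} |E_P f - E_Q f|$ becomes computable for a binary family, here is a toy sketch for 1-D threshold classifiers over a finite grid of thresholds. The threshold family, the grid, and the empirical-mean plug-in are simplifying assumptions for illustration, not the paper's actual algorithm or hypothesis class.

```python
import numpy as np

def empirical_discrepancy(xs_p, xs_q, thresholds):
    """Empirical discrepancy sup_f |E_P f - E_Q f| over the binary family
    of 1-D threshold functions f_t(x) = 1[x >= t].

    xs_p, xs_q: 1-D arrays of samples from the two distributions.
    thresholds: finite grid of candidate thresholds (an assumption that
    makes the sup a simple maximum).
    """
    best = 0.0
    for t in thresholds:
        # difference of empirical means of the indicator under P and Q
        diff = abs((xs_p >= t).mean() - (xs_q >= t).mean())
        best = max(best, diff)
    return best
```

For fully separated samples the sketch returns 1.0 (some threshold splits them perfectly), and 0.0 when both samples coincide, matching the intuition that the discrepancy measures how differently the family evaluates the two distributions.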
----
References:
[B]: Mansour, Yishay, Mehryar Mohri, and Afshin Rostamizadeh. "Domain adaptation: Learning bounds and algorithms." COLT, 2009
[C]: Mohri, Mehryar, and Andres Muñoz Medina. "New analysis and algorithm for learning with drifting distributions." ALT, 2012.
[D]: Awasthi, Pranjal, Corinna Cortes, and Christopher Mohri. "Theory and algorithm for batch distribution drift problems." AISTATS, 2023.
---
Rebuttal Comment 1.1:
Comment: After checking the rebuttal and the other reviewers' comments, I find that the response does not adequately address my concerns. The main weaknesses still exist:
1. The proposed theorem requires an accurate estimation of the distribution discrepancy, which is also related to the hypothesis class. This is hard to estimate and verify in real applications and thus limits the application scope of this work.
2. The authors claim that "We respectfully disagree with this characterization, because previous work required prior knowledge, whereas in our work we obtain similar results while only relying on quantities that can be estimated from the data." From a theoretical point of view, this seems to be overclaimed; see [1,2] and the follow-up papers and references therein (it is also suggested to include a discussion of this line of work in the main paper). In my opinion, the new analysis techniques proposed in this draft may contribute to addressing the real-world problem associated with general models and loss functions, but the theoretical results are still rather preliminary and the overall contribution seems limited. Also, there are no experiments.
Therefore, I tend to maintain my score.
[1] Cutkosky, A.. Parameter-free, dynamic, and strongly-adaptive online learning. In ICML, pp. 2250-2259, 2020.
[2] Wei, C. Y., & Luo, H. Non-stationary reinforcement learning without prior knowledge: An optimal black-box approach. In COLT, pp. 4300-4354, 2021.
---
Reply to Comment 1.1.1:
Title: Response
Comment: We thank the reviewer for their thorough feedback, and for providing a detailed response to our rebuttal.
We would like to provide a response to the above points.
----
Previous work (2): "From a theoretical point of view, it seems to be overclaimed [...]"
We thank the reviewer for providing these additional references. While indeed these papers are addressing a similar problem as they have distribution drift, the setting is clearly different. In these papers, the goal is to minimize the regret.
Our claim is with respect to the different problem that is addressed in our paper: we focus on the best possible learning error at the current time given the past observations. We follow the line of research discussed in the paper (e.g., [2,4]) and we provide an answer to the **open problem** of the paper in [4] (see the end of Section 3 of that paper): how to find a learning algorithm that is adaptive (based on the data) with respect to the drift and theoretically as good as algorithms that know the drift a priori.
----
Computability of discrepancy (1)
a) While indeed it is true that the estimation is hard in the general case, we characterize this estimation and assumption for a binary family of functions (including binary classification). We devote a whole section (Section 5) to this setting. This is a major setting in Machine Learning. We also show that the output of our algorithm is tight in a minimax sense in this setting.
b) We remark that the notion of discrepancy is not our contribution, it was introduced in previous work [1], and it has been used in the previous research mentioned in the paper and in our rebuttal. The discrepancy can be statistically estimated by the data, but its computation can be hard in the general case, indeed this is not a limitation unique to our work on drift [2,3], see in particular discussion in [2].
c) While we could have only focused on the binary setting, we think that the presentation as $F$ being a general family of functions is a contribution, as it allows the trivial application of our algorithm to other settings. Indeed, there are other important families for which it is easy to compute the discrepancy. Examples:
- if the domain $X$ is the real line, and $F$ contains the identity function only, this is a mean estimation problem;
- if $F$ contains an indicator function for each possible subset of elements in a finite domain, this is a discrete density estimation (under total variation distance).
Both examples point to other important learning problems in which the discrepancy is trivial to compute and easily fit within our framework.
----
[1]: Mansour, Yishay, Mehryar Mohri, and Afshin Rostamizadeh. "Domain adaptation: Learning bounds and algorithms." COLT, 2009
[2]: Mohri, Mehryar, and Andres Muñoz Medina. "New analysis and algorithm for learning with drifting distributions." ALT, 2012.
[3]: Awasthi, Pranjal, Corinna Cortes, and Christopher Mohri. "Theory and algorithm for batch distribution drift problems." AISTATS, 2023.
[4]: Steve Hanneke and Liu Yang. Statistical learning under nonstationary mixing processes. AISTATS, 2019. | Summary: The author propose a general algorithm to learn a family of functions with respect to the current distribution at time T. This algorithm achieve a drifting-instance-dependent bound without any prior knowledge of the drift. Based on this, the author further analyze a tractable algorithm on binary classifier.
Strengths: 1. The paper solves an open problem of drifting distribution with unknown prior. The proof technique is clear and solid.
2. It's further analysis on binary classification is inspiring.
Weaknesses: 1. The error bound is only measured on windows that end at T. Instead, a better goal would be to select a window from t1 to t2. As an example of why this problem could be more relevant, there might be large drifts near T while early distributions are close to P_T.
2. The techniques used in this paper are not very novel. Specifically, although the authors claim they do not directly estimate the drift error, the method is still drift detection, only with tolerance towards estimation error. The doubling trick is intuitive and not surprising here.
3. As the authors admit in their paper, this algorithm is very general and might be computationally intractable in more complicated cases. I think the authors could discuss the benefit of their algorithm versus existing work on cases beyond binary classification. They may do this by showing either further proofs or empirical evidence. Currently, the value of an algorithm that ignores assumptions on the magnitude of the drifts is not immediately clear to me.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Why is an empirical estimate based on a window up to T better than empirical estimates based on any consecutive window?
What's the value in getting rid of prior knowledge on drift magnitude in practice or in theory?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Sufficient.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your close reading of our paper and for your feedback.
---
**Question**: The error bound is only measured on windows that end at T. Instead, a better goal would be to select a window from t1 to t2. As an example of why this problem could be more relevant, there might be large drifts near T while early distributions are close to P_T.
**Question**: Why are empirical estimates based on a window up to T better than empirical estimates based on any consecutive window?
**Answer**: One can modify our algorithm and analysis to handle intervals ending at any point in the past. However, without prior knowledge (such as periodicity) the most reasonable assumption is that observations that are closest in time provide the most relevant information. For this reason, a long line of research in this area considers windows that end at T (e.g., references in lines 20-26 in the paper); we improve upon this line of work, solving the open question in [A] for this setting.
We remark that a properly chosen window that ends at T provides a minimax tight estimation under simple assumptions (e.g., bounded drift at each step), see line 24, line 247, and Theorem 7.
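The flavor of adaptively choosing a window that ends at T can be illustrated for simple mean estimation (one of the settings the broader discussion covers): keep doubling the lookback window while the change in the windowed mean stays within a Hoeffding-style statistical error, and stop once drift appears to dominate. The doubling schedule, the specific bound, the constant 2, and the mean-estimation framing are all illustrative assumptions, not the paper's exact algorithm or guarantees.

```python
import numpy as np

def adaptive_window_mean(samples, delta=0.05):
    """Pick a lookback window ending at T for mean estimation under
    unknown drift, for values in [0, 1].

    The window size r is doubled while the widened window's mean stays
    within a Hoeffding-style statistical error of the current estimate;
    once the gap exceeds it, drift error is taken to dominate and the
    smaller window is kept. Returns (estimate, window size).
    """
    x = np.asarray(samples, dtype=float)
    T = len(x)
    r = 1
    est = x[-1]
    while 2 * r <= T:
        wide = x[-2 * r:].mean()
        # statistical error of the current window of size r
        stat_err = np.sqrt(np.log(2 / delta) / (2 * r))
        if abs(wide - est) > 2 * stat_err:
            break  # drift error now dominates: keep the smaller window
        r *= 2
        est = wide
    return est, r
```

On stationary data this sketch grows the window to (nearly) the full history, driving down statistical error; after an abrupt shift it stops the window before stale pre-shift samples contaminate the estimate, which is the trade-off between statistical and drift error the rebuttal refers to.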
----
**Question**: The techniques used in this paper are not very novel. Specifically, although the authors claim they do not directly estimate the drift error, the method is still drift detection, only with tolerance towards estimation error. The doubling trick is intuitive and not surprising here.
**Answer**: We respectfully disagree. We present the first algorithm that does not rely on any prior knowledge of the drift but only uses empirical measures learned from the data. We think this is a major contribution that solves an open problem posed in [A]. Obviously, the widely popular doubling trick is not a contribution of this paper.
Our question to the reviewer is: why do they think this is not a novel contribution?
----
**Question**: As the authors admit in their paper, this algorithm is very general and might be computationally intractable in more complicated cases. I think the authors could discuss the benefit of their algorithm versus existing work on cases beyond binary classification. They may do this by showing either further proofs or empirical evidence. Currently, the value of an algorithm that ignores assumptions on the magnitude of the drifts is not immediately clear to me.
**Answer**: Binary classification is a major problem in machine learning. We think that even just this result (Section 5) is a major contribution, as previous work relied on a priori assumptions on the drift. In the general case, the hardness depends on the family $\mathcal{F}$ (as in almost all ML applications). There are other cases for which the computation of the discrepancy is known to be possible [B] (e.g., regression with squared loss). See also our discussion in lines 312-325.
The hardness of evaluating the discrepancy is not limited to our technique, and it is a known limitation for work on transfer learning/distribution drift. In fact, even with access to more samples from each distribution, previous work has the challenge of estimating the discrepancy (e.g., [C,D]). In our case, we show a major setting (binary setting) in which it is efficiently computable even with limited data.
----
**Question**: What's the value in getting rid of prior knowledge on drift magnitude in practice or in theory?
**Answer**: We eliminated the requirement of prior knowledge on the drift, providing similar guarantees without this knowledge and only relying on the data. This addresses an open problem in [A] prompted by the following reason: prior knowledge of the drift is unrealistic in practice.
----
References:
[A]: Steve Hanneke and Liu Yang. Statistical learning under nonstationary mixing processes. AISTATS, 2019.
[B]: Mansour, Yishay, Mehryar Mohri, and Afshin Rostamizadeh. "Domain adaptation: Learning bounds and algorithms." COLT, 2009.
[C]: Mohri, Mehryar, and Andres Muñoz Medina. "New analysis and algorithm for learning with drifting distributions." ALT, 2012.
[D]: Awasthi, Pranjal, Corinna Cortes, and Christopher Mohri. "Theory and algorithm for batch distribution drift problems." AISTATS, 2023. | Summary: This research paper presents a straightforward algorithm designed to facilitate adaptive learning of models in the presence of distribution drift. The algorithm is specifically designed to adapt to changing data patterns without requiring any prior knowledge of the drift. Moreover, the paper provides a proven bound that guarantees the learning error of the algorithm. Overall, the paper is well-written. The simplicity of the algorithm contributes to its comprehensibility, making the results easily understandable. Additionally, the inclusion of a detailed illustration showcasing the algorithm's effectiveness in handling binary classification with distribution drift serves as a strong validation of the proposed methodology.
I think this is a breakthrough work for learning under non-i.i.d. data, as it does not require prior knowledge of the potential drift. However, this could also be one of the limitations of this paper. The proposed algorithm adaptively finds an r that achieves the best trade-off between the statistical and drift error. Drift sometimes signals the upcoming arrival of a new distribution. The proposed algorithm might achieve competitive average accuracy compared to algorithms that know the magnitude of drift in advance. However, I am not sure if the proposed algorithm will perform equally well if we test it on every sequential portion of the data. For example, a classifier labeling every instance as 0 could also achieve 90% accuracy on data where 90% of instances are class 0. In this case, I get very limited information from its good results.
Overall, I think the proposed methodology provides a new way of designing the learning algorithms under drift.
Strengths: Overall, I think the proposed methodology provides a new way of designing the learning algorithms under drift.
I also agree that r would be an important factor during learning under drift. Therefore, adding this to the learning boundary, from my perspective, is a good start of conducting the following theoretical learning works that discuss the non-identically distributed problem. Theorem 1 could be a quite useful guidance for the following studies in this area.
Weaknesses: As I have mentioned in the summary, Theorem 1 only provides a very general bound. The r is assumed to increase sequentially. However, I think the usefulness of Theorem 1 is very limited if we apply it to learning on sequential data. For sequential data, there is always new data coming in, and thus very possibly a new distribution in the upcoming instances. It is less practical to design r as monotonically increasing if you don't consider the magnitude of the drift. The inequality in Algorithm 1 always compares P_t at r_i and r_{i+1}. This means i is only updated when the distribution drift is large enough between two consecutive i. However, this is not true. Therefore, this algorithm removes the assumption of knowing the magnitude of distribution drift in advance, but to me, it actually adds a hidden assumption that the magnitude of distribution drift should be sufficiently large between at least one pair of two consecutive i in [1, T].
Let's think of the situation where the magnitude of distribution drift is incrementally increasing with i. I am not sure if your algorithm can output the expected r in such a case, whereas algorithms that assume a known magnitude of drift in advance can complete this task.
As is also mentioned in this paper that an adaptive algorithm with respect to the drift that uses distribution-dependent upper bounds
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In Theorem 1, r is the output of Algorithm 1. What will be the limits if using other algorithms?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Some of the limitations have been listed in Section 7. As discussed there, satisfying Assumption 2 is challenging, and the authors already discuss possible solutions.
From my perspective, the authors claim their contribution is removing the prior knowledge of distribution drift. I appreciate this contribution. But as a reader, I am more curious: if we do have some prior knowledge of the distribution drift, how can we use that knowledge to improve what is presented in this work? I think this would provide more insight into this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you for taking the time to review our paper and for your feedback.
-----
**Question**: "The algorithm proposed is to adaptively find r that can achieve the best trade-off between the statistical and drift error. The proposed algorithm might get a competitive average accuracy compared to those algorithms that know the magnitude of drift in advance. However, I am not sure if the proposed algorithm will perform equally well if we test it on every sequential portion of the data. For example, a classifier labeling every instance as 0 could also achieve 90% accuracy on data where 90% of instances are class 0. In this case, I get very limited information from its good results."
**Answer**: Indeed, this is the first algorithm that doesn’t rely on prior knowledge about the drift. The algorithm provides an optimal solution for the aforementioned trade-off for any given time T (Theorem 1) (not an average over T). Its solution is provably as tight as the one of other algorithms that require knowing the magnitude of the drift in advance, solving the open question in [A].
----
**Question**: "[...] The inequality in Algorithm 1 always compares P_t at r_i and r_{i+1}. This means i is only updated when the distribution drift is large enough between two consecutive i. However, this is not true. Therefore, this algorithm removes the assumption of knowing the magnitude of distribution drift in advance, but to me, it actually adds a hidden assumption that the magnitude of distribution drift should be sufficiently large between at least one pair of two consecutive i in [1, T]"
**Question**: "Let's think of the situation where the magnitude of distribution drift is incrementally increasing with i. I am not sure if your algorithm can output the expected r in such a case. But algorithms that assume a known magnitude of drift in advance can complete this task."
**Answer**: It is not true that we require the drift to be big enough between two consecutive i in [1,T]. Note that r_i and r_{i+1} are not two consecutive time steps: r_{i+1} = 2r_i (see line 156). As i increases, this gap is sufficient to detect even small gradual drift, as also discussed in the example in lines 241-256 of the paper.
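The doubling scheme described here (r_{i+1} = 2r_i, comparing empirical estimates across windows) can be sketched as follows. This is an illustrative simplification, not the paper's exact Algorithm 1: the mean-based statistic and the `c / sqrt(r)` consistency threshold are assumptions made for the sketch.

```python
import numpy as np

def adaptive_window(samples, c=1.0):
    """Doubling scheme: grow the look-back window r -> 2r and keep the
    last window whose empirical estimate is statistically consistent
    with the doubled one; disagreement suggests drift dominates."""
    t = len(samples)
    r = 1
    while 2 * r <= t:
        recent = np.mean(samples[-r:])        # estimate over window r_i
        doubled = np.mean(samples[-2 * r:])   # estimate over r_{i+1} = 2 r_i
        # allowed statistical deviation for a window of size r (illustrative)
        if abs(recent - doubled) > c / np.sqrt(r):
            break  # drift detected between r_i and r_{i+1}: stop enlarging
        r *= 2
    return r
```

On a stationary stream the window keeps doubling, while after an abrupt change the comparison fails as soon as the doubled window reaches back past the change point, so small windows can still be returned despite the monotone candidate sequence.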
----
**Question**: In Theorem 1, r is the output of Algorithm 1. What will be the limits if using other algorithms?
**Answer**: Algorithm 1 provides an optimal solution of the trade-off, as proven in Theorem 1, which is tight up to constants for binary classification (Theorem 7, minimax). There are no other algorithms to compare to since we are addressing an open question, and there are no other algorithms that provide such guarantees.
----
**Question**: From my perspective, the author claims their contributions as removing the prior knowledge of distribution drift. I appreciate their contribution. But as a reader, I am more curious if we do get some prior knowledge of the distribution drift, how can we use that knowledge to improve what you have presented in this work? I think this will provide more insights of this work.
**Answer**: The goal of our work is to eliminate the requirement of prior knowledge of the drift, providing similar guarantees without this knowledge. This prior knowledge is unrealistic in practice, and this is the main reason that prompts the open question of [A].
----
References.
[A]: Steve Hanneke and Liu Yang. Statistical learning under nonstationary mixing processes. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 1678–1686. PMLR, 2019
---
Rebuttal Comment 1.1:
Comment: Thanks for your replies.
"r_i and r_{i+1} are not two consecutive time steps. r_{i+1} = 2r_i (see line 156). As i increases, this gap is sufficient to detect even small gradual drift"
By saying "Let's think of the situation that the magnitude of distribution drift is incremental increasing by i. I am not sure if your algorithm can output the expected r in such a case. But for those algorithms that assume a known magnitude of drift in advance. They can complete this task. It is not true that we require the drift to be big enough between two consecutive i in [1,T].",
I didn't mean that your method cannot detect the drift. I was questioning whether your algorithm can output the expected r. As I doubted, "it is less practical to design the r as monotonically increasing"; i here increases quickly across iterations. If a drift is detected at a large i, does that mean the detection is largely delayed relative to when the drift actually occurred?
---
Reply to Comment 1.1.1:
Title: Reply to Comment
Comment: Thank you for your quick reply and for clarifying this question.
In our work, we do not explicitly address a drift detection problem, as our goal is to provide optimal learning with respect to the current distribution. The drift detection problem is not trivial to define in our setting, since each distribution could be different.
Indeed, there is a window size $r^*$ that optimally minimizes the trade-off between statistical error and drift error in the estimation.
Our algorithm provides a window size $\hat{r}$ that yields an estimation error which is, up to constants, as good as the minimum given by $r^*$ [Theorem 1]. That is, the learning error of our algorithm is as good as if we had access to the unknown ground truth about the drift and could compute $r^*$. This is a formal guarantee of the algorithm [Theorem 1], and we do not impose any restrictions on the drift: this holds for both an abrupt drift and a gradual drift. Note that we do not claim that $\hat{r} \simeq r^*$; $\hat{r}$ may be smaller or larger than $r^*$. | 
Model Spider: Learning to Rank Pre-Trained Models Efficiently | Accept (spotlight) | Summary: This paper investigates how to select the most suitable PTM given a target task efficiently and accurately.
A novel approach called Model Spider has been proposed. It learns to encode both PTMs and tasks into vectors and measures their similarity, which is further used to rank the PTMs. It can also incorporate task-specific forward results of PTMs for more accurate re-ranking when resource budgets allow.
Extensive experiments have been conducted to verify the effectiveness of the proposed method.
Strengths: 1. The idea of encoding PTMs and downstream tasks into vectors for PTM ranking is well-motivated.
2. The overall presentation of the proposed method is well-organized and generally easy to follow.
Weaknesses: 1. The generalization ability of the proposed method has not been well-verified. The supervised training method is known to result in models with poor generalization ability. It seems that the proposed method heavily relies on the frozen encoder $\psi$ to capture the relevance between different tasks. What if the new task is quite different from the tasks used in training? When evaluating the proposed method, all the downstream tasks are about image classification. The authors should evaluate the generalization ability of the proposed method with more diverse downstream tasks.
2. The proposed method is highly related to the classification task as it uses the class centers as the task token in Equation (5). However, many tasks do not have such "classes", such as regression tasks and generation tasks. How to adapt the proposed method to such tasks has not been elaborated.
3. Only 10 PTMs are used when evaluating the proposed method on a single-source model zoo, which is not enough. It is recommended to evaluate the proposed method in the NLP domain, where there are numerous pre-trained models and diverse downstream tasks, which can better verify the generalizability of the proposed method on new tasks.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: In addition to the questions above, I have the following questions:
1. What's the motivation for using Equation (4) to train the model to capture the ranking order? There are many other methods that can train the model to learn to rank, such as the pairwise BPR and listwise ListMLE. It is recommended to verify the impact of different ranking loss functions.
2. Does the proposed method require re-training whenever a new PTM comes? If so, the proposed method seems costly when applied in real-world scenarios.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: No, the authors have not discussed the limitations of the proposed method. The authors are recommended to explain whether the proposed method is limited to classification tasks and how to adapt it to other tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Thank Reviewer mKeC for the valuable insights and thoughtful questions**. The feedback enhances our work's clarity and robustness. Here is our response:
**Question 1**: Generalization ability and dependency on frozen encoder $\psi$. The new task differs from the training ones.
**Answer 1**: **Our method, Model Spider, showcases broad task generalization for pre-trained model ranking. It minimizes dependency on the potency of $\psi$**. Model Spider learns task tokens from $\psi$, reflecting task importance relative to pre-trained models, **not an absolute need for exceptional task representation**. In Appendix Figure 2 and line 194, we introduce an attenuated version of $\psi$ at a tiny scale and conduct ablation studies (green vs. pink in the figure).
Moreover, when the downstream task significantly differs from the tasks during training, Model Spider maintains robust performance.
+ **New Field Tasks in Natural Language Processing**
Transitioning from visual recognition to NLP, **we employ rule-based task categorization to create NLP sub-zoos**, refining model ranking. We demonstrate the ranking capabilities in NLP and large language model contexts in Tables 2 and 3 of the general response. Model Spider continues to showcase exceptional applicability and generalization.
+ **New Domain Tasks with Out-of-Distribution**
For downstream tasks beyond training distribution, the general feature extractor $\psi$ might have limited representation capacity for that particular task. To illustrate this, we conduct experiments that analyze $\psi$'s performance on certain datasets. We extract features using $\psi$ and compare the distances to training class centers, akin to Nearest Class Mean (NCM) [1]. In the following, we compare the accuracy (in %) differences between full-parameter fine-tuning and NCM on various datasets. Larger differences indicate more disparities in downstream tasks. Notably, datasets like DTD and Pet, **which deviate notably from the pre-training data distribution, highlight $\psi$'s limited representation capacity in such cases**.
| Method | CIFAR-100 | Caltech-101 | DTD | Pet | EuroSAT |
|-------|-------|-------|-------|-------|-------|
| Fully Fine-tuning | 69.66 | 85.62 | 63.39 | 84.35 | 93.62 |
| Feature-based NCM | 66.41 | 85.80 | 57.19 | 78.64 | 89.11 |
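The feature-based NCM baseline in the table above classifies a sample by the nearest class mean in feature space [1]. A minimal sketch, assuming features have already been extracted by $\psi$ (the feature-extraction step itself is omitted):

```python
import numpy as np

def ncm_predict(train_feats, train_labels, query_feats):
    """Nearest Class Mean: assign each query to the class whose mean
    feature vector (class center) is closest in Euclidean distance."""
    classes = np.unique(train_labels)
    centers = np.stack([train_feats[train_labels == c].mean(axis=0)
                        for c in classes])  # one center per class
    # pairwise distances, shape (n_queries, n_classes)
    d = np.linalg.norm(query_feats[:, None, :] - centers[None, :, :], axis=-1)
    return classes[d.argmin(axis=1)]
```

Because no parameters are fit beyond the class means, the gap between NCM accuracy and full fine-tuning serves as a rough probe of how well the frozen features cover the downstream distribution.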
As shown in the Table 1 of the paper, we observe that Model Spider still demonstrates exceptional performance on DTD and Pets datasets.
Furthermore, **we extend our evaluation to the EuroSAT remote sensing dataset**, demonstrating Model Spider's effective generalization ability.
| Method | EuroSAT |
|-------|-------|
| LEEP | 0.395 |
| LogME | 0.510 |
| Model Spider | **0.682** |
+ **New Type Tasks from Regression**
As shown in Table 2 of the paper (line 277), Model Spider excels in regression tasks, capturing relative model proficiency order.
The $\psi$ captures **the relative order of task representation rather than absolute task-specific information**. It's most effective when a semantic connection exists between models and tasks, enhancing model ranking. Furthermore, across the paradigm spanning pre-training to fine-tuning, pre-training knowledge is maximally stimulated when downstream tasks are semantically related to the models. Otherwise, the fine-tuned performance of unrelated pre-trained models resembles that of randomly initialized models.
**Question 2**: Task representation constrained by class centers.
**Answer 2**: In our method, task tokens encapsulate task representations, with various implementation options; the class center is just one option, specific to classification. **Even without categorical information in the target tasks, meaningful task representation can be established by forming semantically relevant prototype clusters.** For instance, in regression tasks (e.g., dSprites, UTKFace in Table 2), Model Spider constructs semantic clusters representing age and position, yielding favorable results.
We explore alternative task representation. In DTD dataset of Figure 3, we sample around 235 samples and compute the covariance of these samples. The results are as follows:
| Method | DTD |
|-------|-------|
| Prototype Token | **0.549** |
| Covariance Token | 0.513 |
In conclusion, Model Spider is capable of learning from diverse task representations.
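The two task-token constructions compared above can be sketched as follows. These are illustrative: the exact token computation in Model Spider may differ, and `prototype_tokens` / `covariance_token` are names introduced here for the sketch.

```python
import numpy as np

def prototype_tokens(features, labels):
    """One token per class: the mean (prototype) of its extracted features."""
    classes = np.unique(labels)
    return np.stack([features[labels == c].mean(axis=0) for c in classes])

def covariance_token(features):
    """A single label-free task token: the flattened feature covariance."""
    return np.cov(features, rowvar=False).ravel()
```

The prototype variant needs labels (or semantically formed clusters), while the covariance variant summarizes the feature distribution without any class information, which is why it applies even when the task has no "classes".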
**Question 3**: Generalizability on a larger pre-trained model zoo or to new tasks.
**Answer 3**: In Figure 3 of the paper, we extend the model zoo from Table 1 (consisting of 10 pre-trained models, following LogME) **to 42 pre-trained models. This extended zoo covers three similar magnitude architectures**: Inception V3, ResNet 50, and DenseNet 201, each pre-trained on 14 datasets (lines 288-292).
Furthermore, NLP and large language models are evaluated (Tables 2 and 3 of the general response), see Answer 1 for more details. Model Spider consistently demonstrates exceptional performance.
**Question 4**: The motivation for Equation (4). Comparison to the pairwise BPR and listwise ListMLE.
**Answer 4**: Building on the cross entropy loss which aims to boost the position of the one-hot label, we craft a multi-round optimization where, in round $m$, we elevate the $m^{th}$ largest item above items from $m + 1$ to $M$. We pick the $m^{\text{th}}$ item using the $\mathrm{dsc}(\cdot)$ operator. Also, the denominator summation only includes items from $m$ to $M$ (not all). These nuances distinguish our ranking loss from ListMLE. **Our optimization moves from local to global ranking, capturing richer contextual ordering**. Unlike pairwise BPR which only compares items, our loss considers global ranking, reflecting the entire distribution. We include ListMLE results in Table 1 of the Appendix.
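The multi-round optimization described above can be sketched as follows. This follows the textual description (descending ground-truth order via the dsc operator, softmax denominator over items m..M only), but it is a sketch, not the paper's exact loss.

```python
import numpy as np

def listwise_ranking_loss(scores, gt):
    """Multi-round listwise loss: in round m, push the item with the
    m-th largest ground-truth value above items m+1..M; the softmax
    denominator runs over items m..M only (not over all items)."""
    order = np.argsort(-gt)  # descending ground-truth order (the dsc operator)
    s = scores[order]
    loss = 0.0
    for m in range(len(s) - 1):
        suffix = s[m:]       # items m..M
        loss += -(suffix[0] - np.log(np.sum(np.exp(suffix))))
    return loss
```

Scores that agree with the ground-truth ordering yield a lower loss than reversed scores, so minimizing it drives the predicted fitness scores toward the correct global ranking.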
**Thank Reviewer mKeC very much for the valuable suggestions. We will incorporate the relevant content into the final version**.
[1] Distance-Based Image Classification: Generalizing to New Classes at Near-Zero Cost.
---
Rebuttal 2:
Comment: I appreciate the authors' efforts during the rebuttal phase. I have carefully read the reviews from other reviewers and the authors' corresponding responses. I thank the authors for the detailed answers to my review, which have resolved my primary concerns. I'd like to raise my rating to borderline accept regarding the novelty of this work.
---
Rebuttal Comment 2.1:
Comment: We are sincerely grateful for the thoughtful revisions by Reviewer mKeC. We will persist in our efforts moving forward. Thank you very much. | Summary: This paper introduces Model Spider, a unique method to efficiently and accurately rank Pre-Trained Models (PTMs) for a specific task within a model zoo. Model Spider innovatively creates tokens for both PTMs and tasks, encapsulating their characteristics in a manner that facilitates an efficient selection process. It utilizes a separate set of training tasks to learn how to construct these tokens and calculate the fitness score between a model-task pair. The paper also presents a strategy to update the task tokens based on the semantics specific to the top-ranked PTM candidates, improving the final selection. The key contributions of this paper are the innovative method of tokenizing tasks and PTMs for easy ranking, and the ability of the system to incorporate task-specific forward results of certain PTMs within resource limitations. Through rigorous testing across various model zoo configurations, the authors demonstrate the efficacy of Model Spider, showing significant improvements in PTM selection and efficiency. The work represents an innovative solution to the challenge of sifting through the proliferating number of PTMs to find the most suitable model for a given task.
Strengths: 1. Proposed methods show quite significant improvements over the strong baselines.
2. The methodology of tokenizing PTMs looks new. PTMs are tokenized by mapping them to task tokens trained without supervision. The method looks sensible.
3. Applying learning to rank to rank the model fitness is also new and interesting.
4. Extensive experiments with ablation studies.
Weaknesses: 1. “Hyperparameter k” in Figure 1 is not explained at all. In the main text, notation k is also not clearly defined, only top-k mentioned here and there. Readers have to guess k means the number of top ranked Pretrained Models.
2. Some important results and discussions are included in the Appendix, but no reference in the main text. Please refer readers to the appendix for the useful information.
3. Figure 3 is a bit hard to read. What information should I draw from the figure, besides correlation number?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. How well can the method be applied to pre-trained large language models?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: 1. Good tokenization of pre-trained models (PTMs) depends highly on the number of high-quality PTMs. How does Model Spider perform when the number of PTMs M and the number of tasks vary?
2. The training of Model Spider depends on RankAgg. As the authors explain (in the appendix), RankAgg introduces a significant computational burden. How could one get a RankAgg model to be used for training Model Spider in the first place?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We deeply appreciate Reviewer 1u8a's insightful queries and constructive input**. The engagement has undoubtedly enhanced the quality and coherence of our paper. Here are our responses:
**Question 1**: Hyperparameter $k$ in Figure 1.
**Answer 1**: The hyperparameter $k$ corresponds to **the number of top-ranked pre-trained models (lines 51, 249, and 319)**. In Section 4.4 "Re-ranking with Efficiency-Accuracy Trade-off," we introduce a two-step process in Model Spider. Initially, a general task token is used for the initial ranking of pre-trained models. Then, the top-$k$ **pre-trained models perform forward passes on the target task**. This generates PTM-specific task tokens, aiming to enhance our ranking outcomes. For instance, with a smaller value of $k$, Model Spider exhibits high efficiency in model ranking, confirmed by the results in Appendix Table 4: "Comparison of the time consumption and memory footprint." Moreover, in lines 319-323 of the main text and Figure 4, we demonstrate that as $k$ increases, the overall ranking accuracy also improves.
**Question 2**: Refer readers to the Appendix for useful information.
**Answer 2**: Thank Reviewer 1u8a for the valuable reminder. We will include relevant references in the final version, such as comparing **the time consumption and memory footprint of various $k$, other related ablation studies**, and so on.
**Question 3**: The information in Figure 3, besides the correlation number.
**Answer 3**: In Figure 3, we can observe **the direct linear relationship between the predicted model ranking and the downstream fine-tuned accuracy**. The x-axis represents the fine-tuned accuracy of the corresponding pre-trained models, and the y-axis reflects the scores predicted by the model ranking method. Higher scores imply a higher chance of superior performance. For instance, a better result on the Pet dataset, like ours with a score of 0.678 compared to LEEP with 0.361, exhibits a fitted line whose slope closely approximates $1$. **The values of the points along the y-axis offer insights into the distribution of the method's evaluated scores**. Different colors of points correspond to different model structures. **Multiple points of the same color stem from diverse tasks during pre-training**. For example, ResNet-50 models mostly perform well on the Aircraft dataset, whereas this trend is not necessarily consistent on the Pet dataset.
**Question 4**: Application to the pre-trained large language models.
**Answer 4**: We explore the performance of Model Spider in NLP tasks and large language models. **The results are provided in Tables 2 and 3 of the general response**. We sincerely appreciate the forward-looking suggestions from the reviewers, and **we will certainly incorporate the relevant experiments in the final version of the paper** to achieve even greater advancements in the future.
**Question 5**: How does Model Spider perform when the number of PTMs $M$ and the number of tasks vary?
**Answer 5**: In Table 2 of the Appendix, we conducted ablation studies on the performance of Model Spider as the size of the pre-trained model zoo dynamically increases. It is observed that changes in the number of models within the pre-trained model zoo can indeed influence the performance of Model Spider, with **greater model diversity posing increased challenges for the model ranking task**.
In Table 1 of the general response, we further **expanded the model repository by including larger models like ViT-B/16**. This extended coverage enhances the diversity of the model zoo. Model Spider continues to demonstrate consistently high-performance levels in this enriched setting.
Additionally, concerning the scope of training tasks, we downsized the training task set for training Model Spider. **The results of this reduction are presented in the Appendix at line 194 and Figure 2 (yellow compared to pink)**. Our results show that, apart from a slight drop in performance on the DTD dataset, **Model Spider maintains strong overall performance even with fewer training tasks**. This underscores Model Spider's ability to effectively capture diverse pre-trained model characteristics despite reduced training task diversity. These findings highlight the robust performance of Model Spider across different numbers of pre-trained models and tasks.
**Question 6**: RankAgg introduces a significant computational burden. How could one get a RankAgg model to be used for training Model Spider?
**Answer 6**: As mentioned in lines 308 of the main text and 48 of the Appendix, RankAgg requires pre-computation and entails some overhead. However, **it is significantly more efficient compared to full parameter fine-tuning.**
In the experiment detailed in **line 69 of the Appendix and depicted in Appendix Figure 3**, we elaborate on the motivation behind introducing RankAgg. We empirically observe that popular approaches such as NCE, LEEP, and LogME exhibit "good but diverse" pre-trained model ranking orders. Consequently, ensembling their ranking outcomes into a stronger single order appears to be an intuitive way to enhance transferability estimation quality.
The algorithmic process of RankAgg is **outlined in Algorithm 1 (line 16) of the Appendix, as well as in lines 73 and 83**. It involves initially sampling a subset of training tasks containing partial samples, calculating the model rankings derived from NCE, LEEP, LogME, and H-Score methods, and then **aggregating these rankings using RankAgg to obtain improved results**. The RankAgg method is designed to be plug-and-play, demonstrating excellent scalability and ease of use.
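RankAgg's exact procedure is given in Appendix Algorithm 1; as an illustrative stand-in for the idea of ensembling "good but diverse" rankings from NCE, LEEP, LogME, and H-Score into one order, here is a simple Borda-count aggregation (the paper's actual aggregation rule may differ):

```python
import numpy as np

def borda_aggregate(rankings):
    """Aggregate several rankings of M models into one order by Borda
    count: each ranking awards M-1 points to its top model, M-2 to the
    next, and so on; models are then sorted by total points."""
    m = len(rankings[0])
    points = np.zeros(m)
    for r in rankings:        # each r lists model indices, best first
        for pos, model in enumerate(r):
            points[model] += m - 1 - pos
    return [int(i) for i in np.argsort(-points)]  # aggregated order, best first
```

Like RankAgg, such an aggregator is plug-and-play: it consumes only the output orders of the individual transferability estimators, not their raw scores.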
**We sincerely thank the Reviewer 1u8a for the valuable insights**. We are fully committed to incorporating these elucidating descriptions into the final version of the paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for replying my questions. I would keep voting for borderline accept.
---
Reply to Comment 1.1.1:
Comment: We genuinely thank Reviewer 1u8a for the valuable support. We will continue to make revisions accordingly. Thank you very much. | Summary: This paper introduces a very interesting approach named "model spider", to address the challenging problem of selecting suitable Pre-Trained Models (PTMs) from a large number of options to fit the target tasks. Instead of relying on time-consuming and computationally heavy forward or backward passes over all PTMs, the model spider tokenizes both PTMs and tasks, summarizing their characteristics into vectors for efficient PTM selection. Experiments show that model spider performs well in various configurations of model zoos, providing a balance between efficiency and selection accuracy.
Strengths: 1. The problem and idea of this paper are quite interesting.
2. The proposed method is simple yet effective.
3. The results are quite encouraging.
Weaknesses: 1. The results are tested on vision tasks. It is not clear whether the proposed method can be generalized to tasks in other modalities.
2. The model size used in this work is relatively small (only up to tens of millions of parameters). It is not clear whether the proposed method can handle larger models, such as ViT, BERT, and GPT. It would be more exciting if larger models (especially large language models) can be easily evaluated.
3. The proposed method still requires some samples to train on new tasks. It would be better to consider using some methods (e.g., meta-learning) to generate initial tokens for tasks so that new tasks can be handled without additional training samples.
Minor:
The term "token" is a little confusing since the meaning of token in this paper is different from the common meaning of tokens in PTMs.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We sincerely appreciate Reviewer iJup's perceptive suggestions and valuable feedback**. The suggestions have been instrumental in enhancing our paper. Our responses to the questions are as follows:
**Question 1**: Whether the proposed method can be generalized to tasks in other modalities.
**Answer 1**: We thank Reviewer iJup for the valuable suggestions. To assess the performance of Model Spider in other modalities, such as natural language processing, we followed the approach outlined in LogME. **We introduced cased BERT-D, uncased BERT-D, RoBERTa, and RoBERTa-D as pre-trained models to evaluate the performance of Model Spider**. We conducted our evaluation on the MRPC and SST-2 downstream datasets. **The comprehensive results are presented in Table 2 of the general response, highlighting Model Spider's consistent and robust generalization capabilities**. We will include these additional experimental results in the final version for further clarification.
Additionally, we have showcased the generalization results on large language models in Table 3 of the general response. These results further highlight that Model Spider maintains strong scalability and exceptional performance across diverse tasks.
**Question 2**: Whether the proposed method can handle larger models, such as ViT, BERT, and GPT. It would be more exciting if larger models (especially large language models) could be easily evaluated.
**Answer 2**: We thank Reviewer iJup for the forward-looking feedback. We have indeed addressed the suggestions comprehensively. As described in Answer 1, **we incorporated the ViT-B/16 [4] into our existing model zoo**, covering larger models. Moreover, in alignment with the inquiry about NLP pre-trained models, **we evaluated the model ranking capabilities on the BERT series**. These results are presented in Table 1 of the general response. Additionally, in Table 3 of the general response, **we showcased Model Spider's performance on GPT-type large language models**, reaffirming its remarkable extensibility and outstanding performance.
| Method | Aircraft | Caltech101 | Cars | CIFAR10 | CIFAR100 | DTD | Pets | SUN397 | Mean |
|-------|-------|-------|-------|-------|-------|-------|-------|-------|--------|
| NCE | 0.523 | **0.681** | **0.790** | 0.701 | 0.659 | 0.305 | 0.681 | 0.762 | 0.638 |
| LEEP | 0.318 | 0.107 | 0.682 | 0.591 | 0.660 | 0.114 | 0.514 | 0.486 | 0.434 |
| Model Spider | **0.693** | 0.679 | 0.781 | **0.879** | **0.955** | **0.699** | **0.812** | **0.869** | **0.796** |
**Table 1**: Comparison of NCE, LEEP, and Model Spider performance after extending the pre-trained model zoo in the original Table 1, including the addition of the ViT-B/16 pre-trained model. The remaining experimental setup is consistent with Table 1 in the paper.
| Method | MRPC | SST-2 |
|-------|-------|-------|
| LogME | 0.493 | 1.000 |
| Model Spider | **0.654** | 1.000 |
**Table 2**: Ranking performance of pre-trained model ranking on NLP tasks.
| Method | Operating System | Computer Architecture | College Physics | College Chemistry | Electrical Engineer | Metrology Engineer | Advanced Mathematics | Probability and Statistics | Modern Chinese History | Legal Professional |
|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| Random | 0.083 | -0.237 | -0.185 | 0.021 | -0.532 | 0.181 | -0.010 | 0.013 | -0.005 | -0.605 |
| ChatGPT-Top1 | 0.136 | 0.052 | 0.024 | -0.162 | 0.232 | -0.061 | -0.198 | 0.172 | 0.105 | 0.305 |
| Model Spider | **0.720** | **0.682** | **0.311** | **0.686** | **0.308** | **0.682** | **0.184** | **0.243** | **0.891** | **0.737** |
**Table 3**: Ranking performance of pre-trained model ranking on GPT-type LLMs, measured by weighted $\tau_w$. The horizontal axis represents evaluation datasets from C-Eval benchmark.
**For more details, please see the general response.**
**Question 3**: It would be better to consider using some methods (e.g., meta-learning) to generate initial task tokens so that new tasks can be handled without additional training samples.
**Answer 3**: We thank Reviewer iJup for the thoughtful suggestion. Our approach primarily revolves around learning to rank, wherein the model's efficient ranking capability is acquired by learning from its historical performance across a diverse range of tasks. **Our learning process also shares connections with the Few-Shot Learning (FSL) scenario and metric-based approach of Meta-Learning**.
Model Spider starts its journey from the performance achieved through fine-tuning established models. It then advances its capabilities through training on additional tasks, **subsequently enabling the generalization to tasks unseen before**. This is exemplified in **the cross-task setting depicted in line 267, Table 1, and Figure 3 of the paper**. Model Spider's ability to sustain its ranking across tasks of varying types and domains is notable. This capability extends **beyond the generalization power limited to a singular task type and aligns with the notion of FSL within the meta-learning framework**. This involves acquiring recognition abilities from specific subsets of categories in FSL and extending these abilities to categories that have not been encountered previously. We intend to incorporate further relevant descriptions of this aspect in the final version of our work.
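As a concrete illustration of the learning-to-rank idea discussed above — training so that predicted model-task scores reproduce the ground-truth transferability ranking — a pairwise margin objective is one common choice. The following is a hypothetical simplification for illustration only, not the paper's actual loss:

```python
def pairwise_ranking_loss(scores, transferability, margin=0.1):
    """Hinge-style pairwise loss: whenever model i truly transfers better
    than model j, its predicted score should exceed j's by `margin`."""
    loss, pairs = 0.0, 0
    m = len(scores)
    for i in range(m):
        for j in range(m):
            if transferability[i] > transferability[j]:
                loss += max(0.0, margin - (scores[i] - scores[j]))
                pairs += 1
    return loss / max(pairs, 1)
```

A perfectly ordered prediction with sufficiently large gaps incurs zero loss, while inverted pairs are penalized in proportion to the margin violation.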
**Question 4**: The term "token" is a little confusing since the meaning of token in this paper differs from the common meaning of tokens in PTMs.
**Answer 4**: We thank Reviewer iJup for the valuable feedback. We appreciate the suggestion, and in the final version, **we will use the term "model representation" or a more appropriate phrase instead of "token" to avoid confusion and ensure clarity**.
Once again, **we are grateful for Reviewer iJup's time and effort in reviewing our paper**. Thank Reviewer iJup for the continued support and engagement in advancing the field. | Summary: This paper proposes a method to select the "best" pre-trained model for a given task. This problem is important given the large number of available pre-trained models. The key behind Spider relies on tokenizing both the models and the tasks by summarizing their characteristics into vectors. More specifically, the authors use a general encoder and measure the similarity between tokens in a supervised manner: the ranking of models are obtained through some historical tasks. Normally, one would take a list of pre-trained model, (optionally) freeze the feature-extractor part, add a randomly-initialized head on top, fine-tune on the dataset, and then measure their transferability. This process is computationally intensive.
The proposed model Spider first randomly samples training tasks and assumes that we can compute the transferability for M pre-trained models and thus, their ranking. Given this dataset, the model is trained to learn a similarity function to mimic this ranking. The only features that are used are task tokens and tokens from the models. A model token consists of a representation to reflect how good the pre-trained model is in general. A task token is an embedding that represents a class in a dataset. Finally, different re-ranking strategies are proposed with an efficiency-accuracy trade-off.
Transferability assessment is not my domain. Nevertheless, the proposed method seems sound to me. The experiment section is really complete and includes many datasets and baselines for single-source and multi-source model zoo. Finally, the ablation study emphasizes the importance of RankAgg.
Overall, the paper is well written, the approach seems novel, and again, given that this topic is not my domain, I don't see any reason to reject this paper.
Strengths: - Efficient model selection method for a given task
- The method is novel
- Good performance in the experiment section
Weaknesses: - Some concrete examples of Task & Model tokens would be appreciated
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: I have no question
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: There is not a limitation section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We sincerely value the insightful suggestions** provided by Reviewer PpFQ. **Within our General Response PDF file**, we have provided **concrete examples of model-task tokens**, demonstrated with pre-trained models on datasets such as Food, SUN397, Caltech101, and Dogs. This presentation effectively underscores the semantic relationships between various models and tasks. We are fully committed to meticulously refining and seamlessly **incorporating these details into the final version**.
Once again, we extend our heartfelt and profound gratitude to Reviewer PpFQ. **Thank you very much**.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal. I am satisfied with the answer.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate Reviewer PpFQ for dedicating valuable time to offer constructive insights. Thank you very much. | Rebuttal 1:
Rebuttal: **Dear Reviewers:**
We would like to express our sincere gratitude to Reviewers Gb1N, PpFQ, iJup, 1u8a, and mKeC for **their insightful reviews of our submission**. We are heartened by the constructive feedback and valuable suggestions each reviewer provided. We acknowledge that **all reviewers** have highlighted the motivating factors behind our proposed method, Model Spider, emphasizing its **novelty and interest**. They have also recognized **our comprehensive and detailed evaluation**, as well as the **clarity and organization of our paper's presentation**.
We are pleased by the positive feedback from Reviewers Gb1N, PpFQ, iJup, and 1u8a regarding Model Spider's **outstanding performance**. We understand the concern raised by Reviewer mKeC about **future task generalization** and the questions posed by Reviewers mKeC, iJup, and 1u8a about the applicability, such as ranking on tasks with new types, modalities, or on larger pre-trained vision or language models. **To address these, we conducted extensive experiments to show Model Spider's robustness across various scenarios**.
Moving forward, we will present our supplementary experiments.
+ **For Pre-trained Larger Vision Models**
For ranking on larger vision pre-trained models, we have incorporated **the ViT-B/16 model into our existing pre-trained model zoo**. This medium-to-large-scale vision model with approximately 100 million parameters was introduced by fine-tuning only the last linear layer (linear probing). We then **compared the performance of NCE, LEEP, and Model Spider** on this extended pre-trained model zoo.
| Method | Aircraft | Caltech101 | Cars | CIFAR10 | CIFAR100 | DTD | Pets | SUN397 | Mean |
|-------|-------|-------|-------|-------|-------|-------|-------|-------|--------|
| NCE | 0.523 | **0.681** | **0.790** | 0.701 | 0.659 | 0.305 | 0.681 | 0.762 | 0.638 |
| LEEP | 0.318 | 0.107 | 0.682 | 0.591 | 0.660 | 0.114 | 0.514 | 0.486 | 0.434 |
| Model Spider | **0.693** | 0.679 | 0.781 | **0.879** | **0.955** | **0.699** | **0.812** | **0.869** | **0.796** |
**Table 1**: Comparison of NCE, LEEP, and Model Spider performance after extending the pre-trained model zoo in the original Table 1, including the addition of the ViT-B/16 pre-trained model. The remaining experimental setup is consistent with Table 1 in the paper.
+ **For Pre-trained Models of NLP**
To assess the adaptability of Model Spider to tasks in other modalities, such as Natural Language Processing (NLP), **we introduced cased BERT-D, uncased BERT-D, RoBERTa, and RoBERTa-D as pre-trained models** to evaluate the performance of Model Spider. Our evaluation included the **MRPC** [1] and **SST-2** [2] downstream datasets. MRPC consists of sentence pairs extracted from online news sources, where the task involves determining the semantic equivalence of sentences within each pair. On the other hand, SST-2 comprises sentences from movie reviews. The task is focused on predicting the sentiment (positive/negative) of a given sentence. **This model ranking task in the field of NLP follows the approach established by LogME** [3]. We obtained the following results:
| Method | MRPC | SST-2 |
|-------|-------|-------|
| LogME | 0.493 | 1.000 |
| Model Spider | **0.654** | 1.000 |
**Table 2**: Ranking performance of pre-trained model ranking on NLP tasks.
+ **For Pre-trained Larger Language Models**
Furthermore, we perform pre-trained model ranking for Large Language Models (LLM) of GPT-type, which includes ChatGPT, ChatGLM2-6B, Qwen-7B, Baichuan-7B, MOSS, bloomz-mt-176B, and Chinese Alpaca-13B. This involves **introducing the C-Eval Benchmark** [4] and employing Model Spider to rank these models' performance **across 10 sub-evaluation datasets encompassing domains** such as science, technology, engineering, mathematics, and humanities. **We utilize embeddings available at** https://huggingface.co/GanymedeNil/text2vec-large-chinese **to encode the downstream tasks**. We leverage historical performance from the other datasets provided by C-Eval as the training set for Model Spider, with the objective of learning model and task representations (tokens). As the output access interface of LLMs is limited (for instance, obtaining features from ChatGPT is challenging), common methods fail in such scenarios. We compare our model ranking approach **against these two setups**: a random ranking, and ranking ChatGPT as the top model with others in random order. The obtained results are as follows:
| Method | Operating System | Computer Architecture | College Physics | College Chemistry | Electrical Engineer | Metrology Engineer | Advanced Mathematics | Probability and Statistics | Modern Chinese History | Legal Professional |
|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| Random | 0.083 | -0.237 | -0.185 | 0.021 | -0.532 | 0.181 | -0.010 | 0.013 | -0.005 | -0.605 |
| ChatGPT-Top1 | 0.136 | 0.052 | 0.024 | -0.162 | 0.232 | -0.061 | -0.198 | 0.172 | 0.105 | 0.305 |
| Model Spider | **0.720** | **0.682** | **0.311** | **0.686** | **0.308** | **0.682** | **0.184** | **0.243** | **0.891** | **0.737** |
**Table 3**: Ranking performance of pre-trained model ranking on GPT-type LLMs, measured by weighted $\tau_w$. The horizontal axis represents evaluation datasets from C-Eval benchmark.
**Thank you for your consideration and support.** We are committed to addressing the reviewers' feedback and further improving our work based on the insightful comments. **We once again extend our appreciation to the reviewers for their invaluable contributions**.
[1] Automatically constructing a corpus of sentential paraphrases.
[2] Recursive deep models for semantic compositionality over a sentiment treebank.
[3] Logme: Practical assessment of pre-trained models for transfer learning.
[4] C-eval: A multi-level multi-discipline chinese evaluation suite for foundation models, https://cevalbenchmark.com/static/leaderboard.html
Pdf: /pdf/134c9e7d8f2df1bf8a13b662e32300287f680376.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper proposes a method called MODEL SPIDER for selecting the most suitable pre-trained models (PTMs) for a given downstream task. The proposed method aims to maintain a balance between efficiency and accuracy in the selection of PTMs. To achieve this, the authors tokenize all PTMs and tasks into vector representations that capture their general properties and their relationship with each other. During the training process, they dynamically select a partial set of PTMs and incorporate the specific tokens into the sampled tasks. During deployment, the authors employ a coarse-grained PTM search to narrow down the candidate PTMs and then fine-tune the selected PTMs for downstream use.
The proposed approach is evaluated on several benchmark datasets, and the results demonstrate that it outperforms existing PTM selection methods in terms of efficiency and accuracy. As part of the analysis of the effectiveness of the proposed method, the authors conduct ablation studies. They find that incorporating PTM-specific features and prompts improves the performance of the proposed method significantly.
Strengths: * This paper presents a clear and well-motivated problem statement in the Introduction: how to select the best pre-trained model (PTM) for a given task. The approach section introduces the necessary preliminaries and explains almost all parts of the proposed method in detail.
* The method is novel and interesting, as it constructs tokens for both the PTMs and the target task, and then measures their similarity to find the optimal match. This way, the method can leverage the rich information encoded in the PTMs and adapt it to different tasks.
* The evaluation is comprehensive and detailed, covering 10 PTMs from five architectures and 9 downstream datasets for classification and regression tasks. The paper also compares the method to several strong baselines and shows that it outperforms them in selecting the most suitable PTM for each task.
Weaknesses: * The fitness function is a neural network that maps the PTM and task tokens to a scalar score, but the paper does not specify how this score is calculated or which threshold it is compared against.
* The paper lacks details on the design and training of the task encoder, which is a crucial component of the method. In lines 42-43, the authors mention that they use a Transformer module and refer to the reference [72] "Attention is all you need". In my opinion, this is an important detail to explain how the tokenization process is performed.
* The paper does not explain how the authors handle noise and irrelevant data in the task tokenization process.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Do we need to represent all data instances, or can we use a sampling strategy to select the most representative examples from the data for encoding?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper does not include a limitation section that explicitly discusses the drawbacks or challenges of the proposed approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We thank Reviewer Gb1N for the insightful review** and for recognizing the strengths of our paper and the Model Spider method for pre-trained model selection. **We're grateful for Reviewer Gb1N's recognition of our novel approach**, which leverages tokens representing both pre-trained models and tasks to measure their similarity.
**We respond to Reviewer Gb1N's inquiries as follows:**
**Question 1**: How is this score $\hat{\mathrm{t}}_{\phi_m \rightarrow \mathcal{T}}$ calculated, and does it need a threshold?
**Answer 1**: We thank Reviewer Gb1N for the insightful feedback. **We calculate the model-task score using Equation 6, as outlined in lines 188-190.** Specifically, we employ **a transformer-based module** to compute the similarity between model and task tokens. This is achieved through a one-layer Transformer, a self-attention mechanism that accommodates various inputs, involving multi-head self-attention, multi-layer perceptron, and layer norm blocks in alternating layers. **The input to the Transformer consists of a union set of model and task tokens** denoted as
$\boldsymbol{z}=\left[\boldsymbol{\theta}_{\boldsymbol{m}}, \boldsymbol{\mu}(\mathcal{T})\right] \in \mathbb{R}^{d \times(1+C)}$,
leading to the similarity score $\hat{\mathrm{t}}_{\phi_m \rightarrow \mathcal{T}}$ computed as:
$\operatorname{sim}\left(\boldsymbol{\theta}_m, \boldsymbol{\mu}(\mathcal{T})\right)=\mathrm{FC}(\operatorname{transformer}(\boldsymbol{z})[0])$.
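To make the data flow of $\operatorname{sim}=\mathrm{FC}(\operatorname{transformer}(\boldsymbol{z})[0])$ concrete, here is a dependency-free toy sketch: one attention step over the concatenated model and task tokens, followed by a linear head on the output at position 0 (the model token). Query/key/value projections, multi-head attention, the MLP, and layer norm are deliberately omitted, so this mirrors only the structure, not the actual module:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    total = sum(es)
    return [e / total for e in es]

def toy_similarity(model_token, task_tokens, fc_weights):
    """Attend from the model token over [model_token] + task tokens,
    then map the attended vector at position 0 to a scalar score."""
    tokens = [model_token] + task_tokens  # z: (1 + C) tokens of dim d
    logits = [sum(a * b for a, b in zip(model_token, t)) for t in tokens]
    attn = softmax(logits)
    out0 = [sum(w * t[d] for w, t in zip(attn, tokens))
            for d in range(len(model_token))]
    return sum(w * o for w, o in zip(fc_weights, out0))
```

Ranking pre-trained models for a task then amounts to evaluating this score once per model token and sorting.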
For downstream tasks, we calculate scores for multiple pre-trained models, creating a ranking based on these scores. **Our focus is on the ranking order, where higher scores correspond to better downstream performance.** (lines 118-121). The calculation process **does not involve a threshold** for comparison. **The weighted** $\tau_w$ metric evaluates the quality of rankings **based on relative order rather than absolute scores**. The $\tau_w$ considers differences in ranking positions and is not concerned with specific scores. **We will provide more details in the final version**.
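Since the weighted Kendall $\tau_w$ carries the evaluation throughout, a simplified sketch of the metric may help. This variant weights each pair hyperbolically by its ground-truth ranks only; library implementations (e.g. SciPy's `scipy.stats.weightedtau`) additionally symmetrize over both rankings:

```python
def weighted_tau(truth, pred):
    """Weighted Kendall tau: pairs involving top-ranked models count more.
    Returns +1 for a perfectly concordant ranking, -1 for a reversed one."""
    n = len(truth)
    order = sorted(range(n), key=lambda i: -truth[i])  # best model first
    rank = {idx: r for r, idx in enumerate(order)}     # 0 = best
    num = den = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            w = 1.0 / (rank[i] + 1) + 1.0 / (rank[j] + 1)
            sign = (truth[i] - truth[j]) * (pred[i] - pred[j])
            num += w * (1 if sign > 0 else -1 if sign < 0 else 0)
            den += w
    return num / den
```

The hyperbolic weights make mistakes near the top of the ranking cost more than mistakes among the worst models, which matches the practical goal of picking the best pre-trained model.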
**Question 2**: What are the construction details of the task encoder.
**Answer 2**: We thank Reviewer Gb1N for the perceptive feedback. The method's task encoder is indeed a critical component. **The task representation (tokens) is derived using an additional self-supervised training tokenizer, denoted as** $\psi$, **with relevant explanations in the Task Token section (lines 177-179, 272).** This encoder is an additional frozen unit **with the same parameter magnitudes as the pre-trained models to be ranked**.
We detail in line 272 (or in Appendix line 35) that $\psi$ **is realized through a pre-trained Swin-B-based EsViT** [1,2] (accessible at https://github.com/microsoft/esvit), trained on ImageNet-1K using self-supervised learning. In our experiments, this encoder functions as a feature extractor. **Additionally, the Transformer-based module (in lines 42-43) details for assessing model-task similarity in Model Spider** are expounded upon in lines 186-188 and response to the previous question. We will further augment this in the final version.
[1] Efficient self-supervised vision transformers for representation learning.
[2] Swin transformer: Hierarchical vision transformer using shifted windows.
**Question 3**: How to handle noise and irrelevant data in the task tokenization process.
**Answer 3**: We thank Reviewer Gb1N for the thoughtful inquiry. **While our task tokenizer might include noise and irrelevant data, Model Spider effectively mitigates noise effects on task representations.** We sample tasks from a mixed dataset during training (lines 273, 293, and Appendix line 149), enhancing diversity. **Model Spider adeptly captures task information,** utilizing numerous training tasks (Appendix line 153) and random sampling to ensure task variability, minimizing noise impact.
Moreover, in the experiments presented **in the latter part of Table 1**, we assess the performance of pre-trained model ranking in a few-shot manner (10 examples per class), and each result is repeated 30 times for evaluation. **In few-shot tasks, where the sample size is limited, noise and irrelevant data have a more pronounced impact on task representations (tokens).** Nonetheless, Model Spider demonstrates stable and superior performance even under these conditions. We appreciate Reviewer Gb1N's consideration of these aspects within our methodology.
**Question 4**: Do we need to represent all data instances, or use a sampling strategy for encoding.
**Answer 4**: We thank Reviewer Gb1N for the precise question. In Model Spider, **we employ random sampling** to create task tokens by selecting 50 instances per class, elaborated in Appendix, line 153. **Our method does not necessitate representing every instance or adopting a sampling strategy to choose the most representative examples.** Model Spider's robustness enables effective learning from randomly sampled task representations (tokens). Our approach of random task sampling is detailed in Appendix lines 153 and 187: the single-source experiment (Table 1) uses around 1k tasks, and the multi-source experiment (Figure 3) employs about 4k tasks. Ample sampling is employed to mitigate the influence of randomness during training and testing to the fullest extent possible.
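The per-class random sampling described above (up to 50 instances per class, aggregated into class-level task tokens) might look like the following sketch; feature extraction is assumed to have happened upstream, and mean pooling as the aggregation is our assumption for illustration:

```python
import random
from collections import defaultdict

def build_task_tokens(features, labels, per_class=50, seed=0):
    """Sample up to `per_class` feature vectors per class and average
    them into one task token per class."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for feat, label in zip(features, labels):
        by_class[label].append(feat)
    tokens = {}
    for label, feats in by_class.items():
        sampled = rng.sample(feats, min(per_class, len(feats)))
        dim = len(sampled[0])
        tokens[label] = [sum(f[d] for f in sampled) / len(sampled)
                         for d in range(dim)]
    return tokens
```

Because the tokens are averages over a random subset, re-sampling perturbs them only mildly, which is consistent with the robustness claim above.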
**Question 5**: The paper does not include a limitation section that explicitly discusses the drawbacks or challenges of the proposed approach.
**Answer 5**: We thank Reviewer Gb1N. **Our Discussions and Limitations section can be found on line 252 of the appendix.** We will ensure its placement is appropriately adjusted in the final version.
**We sincerely thank Reviewer Gb1N for their insightful review and valuable questions**. We are committed to addressing these queries thoroughly in the final version.
---
Rebuttal Comment 1.1:
Comment: I appreciate your efforts to address my comments and concerns. You have provided sufficient information and clarification in this rebuttal. I increased my overall rating for this paper from Borderline Accept to Accept.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate Reviewer Gb1N's insights and feedback. We remain committed to further improvement. Thank you very much. | null | null | null | null | null | null |
Learning Dictionary for Visual Attention | Accept (poster) | Summary: This paper presents a new architecture that can be applied for various tasks, including image classification, point cloud classification and image segmentation. The key idea is to leverage a learnable dictionary module to replace the attention model in the transformer architecture. The model is able to achieve compelling results on multiple tasks with the low computational cost, including the fewer parameters, less GPU training time. Experimental results are conducted on multiple vision tasks and achieve compelling results.
Strengths: ### improved results over baseline models
- Based on the quantitative results shown in Tables 2, 3 and 4, the proposed model outperforms the existing models on all tasks.
Weaknesses: ### (Major) Confusion on method
- It took me a long time to actually get the main message of this paper, which is hidden in a bunch of overwhelming technical details and verbose writing.
- Most importantly, the dictionary learning directly learns the sparse representation for the attention $A$. However, this is just a sparse representation, and I cannot get the attention similarity between tokens of different grid features.
- The equations $D=[d_1^c,d_2^c,\cdots,d_k^c]=[d_1^r, d_2^r,\cdots,d_s^r]^T\in R^{k\times s}$ and $\Phi=[\phi_1^c,\phi_2^c,\cdots,\phi_k^c]=[\psi_1^r, \psi_2^r,\cdots,\psi_s^r]^T\in R^{n\times s}$ are confusing, as they differ from the dictionary $D\in R^{n\times s}$ and sparse representation $\Phi\in R^{k\times s}$ defined before.
- This equation is also confusing. $x_{1*}$ denotes one grid feature across its dimensions, but every channel is represented by a different code $\phi$, which makes the embedding hard to understand.
- The key optimisation function is also unclear. The dimensions of $\mathbf{D}$ and $\mathbf{\Phi}$ are different. How could they calculate the reconstruction in equation (6)?
### (Major) More results should have been expected
- I expected to see more segmentation results, but only the compared attention map is provided in Figure 3, and no results in supplemental material either.
- A quantitative comparison is expected for the image segmentation task.
- It would be helpful to provide some visual examples of the segmentation map in the main paper.
### (Minor) Presentation
- For Figure 2, the paper claims "the deeper the better", but this is not evident from the visual results in the figure. It would be stronger to provide an accuracy analysis for these results.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Apart from a few questions that need clarification, listed above, the only additional result I might suggest is to also report inference time, as in some related works, for a slightly fairer comparison.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 1 poor
Limitations: There are no discussion or broader societal imparts discussed in the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your valuable suggestions on this paper. The following responses might help address your questions and concerns about this paper:
#### 1. **Weakness 1. (Major) Confusion on method...**
**Q(1)** "...The key motivation seem..."
**Answer to Q(1):** Thank you for your careful review and feedback. In the **revised version**, we further improve the clarity and organization of the paper to ensure that the main message is more easily accessible to readers.
The primary focus of our paper is on introducing a dictionary learning-based visual attention module and showcasing its potential for capturing visual attention. The attention mechanism is one of the key structures of the popular, high-performance transformer networks. In human visual perception, a large number of studies (some references are cited in the Introduction and Section 3.3.2) have shown that signal encoding in the primary visual cortex is similar to dictionary learning and is closely related to human visual attention. Inspired by this, we propose a dictionary learning-based visual attention (Dic-Attn) module for machine vision, exploiting the potential of dictionary learning to disentangle nonlinear structural information, and exploring an intuitive way to exploit discriminative information and visual attention.
**Q(2)** "...I cannot get attention similarity between tokens of different grid features..."
**Answer to Q(2):**
The sparse representations serve as coordinates with respect to the dictionary. They indicate the importance or relevance of each atom in the dictionary for reconstructing the input features. Importantly, the attention similarity between different tokens or grid features is inherently encoded within the dictionary and the sparse representation.
The diagonal transformation matrix $\mathbf{W_D}$ and the transformation matrix $\mathbf{W_{\phi}}$ are specifically designed to re-weight the dictionary and transform its corresponding sparse representations, respectively. Moreover, $\mathbf{W_D}$ operates in a vector-wise manner, where each element on the diagonal serves as a weight for one dictionary column, i.e., one atom. On the other hand, $\mathbf{W_{\phi}}$ transforms the sparse encoding of individual elements.
These transformation parameters are driven by the task objective function and are updated through the backpropagation process.
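As a toy sketch of how these two transforms compose (our own illustration with assumed shapes and names, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, s = 8, 4, 6                    # feature dim, number of atoms, tokens

D   = rng.standard_normal((n, k))    # dictionary: columns are atoms
Phi = rng.standard_normal((k, s))    # sparse codes, one column per token

# W_D: diagonal, one scalar weight per atom (vector-wise re-weighting)
W_D   = np.diag(rng.uniform(0.5, 1.5, size=k))
# W_Phi: transforms the sparse encoding across atoms
W_Phi = rng.standard_normal((k, k))

attn = (D @ W_D) @ (W_Phi @ Phi)     # re-weighted reconstruction, shape (n, s)
assert attn.shape == (n, s)
```

In the paper, both transforms are learned by backpropagation from the task loss, as described above.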
**Q(3)** "The equations are confused..."
**Answer to Q(3):** We apologize for the confusion and appreciate your correction. In the **revised version**, we have corrected these typos:
$D = [d^c_1 d^c_2 \cdots d^c_k]= [d^r_1 d^r_2 \cdots d^r_n ]^T\in R^{n\times k},$
$\Phi = [\Phi_1 \Phi_2 \cdots \Phi_b]\in R^{b\times k\times s},$
$\Phi_i = [\phi_1 \phi_2 \cdots \phi_s]= [\psi_1 \psi_2 \cdots \psi_k ]^T\in R^{k \times s}.$
In our proposed method, we aim to capture the critical parts of visual data via dictionary learning. Based on empirical experience, we build a single dictionary of size $n\times k$ shared across the batch, instead of a separate dictionary at the fine-grained batch level. The results also show that this dictionary is complete enough as a feature base and discriminative. Hence, given the input $X\in R^{b\times n \times s}$, the sparse representation $\Phi \in R^{b\times k\times s}$ serves as its sparse embedding.
**Q(4)** "The key optimization function... calculate the reconstruction in equation (6)"
**Answer to Q(4):** We appreciate your pointing out the unclear optimization function and the need to clarify the computation in Equation 6.
Firstly, we clarify how the optimization process works in Algorithm 1 and the key optimization function for dictionary learning. Due to space limitations, please refer to our response to Weakness 1 from Reviewer nmhX. Additionally, in the revised version, we provide a more detailed explanation of the key optimization function for dictionary learning.
After correcting the typo, we illustrate how $D$ and $\Phi$ produce the reconstruction in Equation 6.
Therefore, let $X_i=[x_1, \cdots, x_s]\in R^{n\times s}$ denote the input, $D\in R^{n\times k}$ the dictionary, $\phi$ a sparse coding vector, and $\Phi_i = [\phi_1 \phi_2 \cdots \phi_s]\in R^{k\times s}$ the sparse representation. The non-linear scaling function NL(·) in Equations 4 and 6 does not change the dimension of the sparse representation $\Phi$. Therefore, the reconstruction can be computed and the attention map output, i.e., $R^{n\times s} \rightarrow R^{n\times k} \times R^{k\times s} \rightarrow R^{n\times s}$.
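A minimal shape check of this flow (illustrative only; the soft-threshold below is our plausible stand-in for the unspecified shape-preserving NL(·)):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, s = 16, 10, 5
X_i   = rng.standard_normal((n, s))   # input: s tokens as columns
D     = rng.standard_normal((n, k))   # dictionary
Phi_i = rng.standard_normal((k, s))   # sparse representation of X_i

# NL(.) must be shape-preserving; a soft-threshold is one plausible stand-in
NL = lambda z: np.sign(z) * np.maximum(np.abs(z) - 0.05, 0.0)

X_rec = D @ NL(Phi_i)                 # (n, k) @ (k, s) -> (n, s)
assert X_rec.shape == X_i.shape == (n, s)
```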
#### 2. **Weakness 2 / Question.** "(Major) More results should have been expected..."; "...perform inference time reported..."
**Answer.** We have added more experiments to verify our proposed module explicitly; the added experimental results, including inference times, are placed in the rebuttal box at the top. We appreciate your suggestion and hope that the revised version better addresses your concerns.
#### 3. **Weakness 3. (Minor) Presentation...**
**Answer.** Thank you for your careful review and feedback. The results show that as the depth increases, the accuracy initially improves but starts to decline beyond a certain depth. The initial rise can be attributed to the fact that deeper networks can capture more intricate relationships between inputs and outputs. However, they are also prone to overfitting, particularly when the training dataset is relatively small, as when training from scratch on CIFAR-10.
Therefore, it is important to carefully consider the trade-offs between depth and complexity when designing a neural network, rather than assuming that deeper is always better.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: Thanks for providing these detailed responses. All my concerns have been addressed. The rebuttal provides a good analysis of the proposed methods along with more results. This new learnable dictionary module is meaningful for the community. Therefore, I raise my score to accept.
---
Reply to Comment 1.1.1:
Comment: We are glad that our response has addressed your concerns, and thank you for raising your score for our paper! Our current rating in the review system is borderline accept (5). We will carefully revise the manuscript and include the experiments in the final version. Thank you again for your time; we look forward to your considering a higher accept rating. | Summary: This paper is about a new attention module called Dic-Attn, which is based on dictionary learning and sparse coding in the human visual perception system. The module can extract nonlinear structural information in visual data and reconstruct attention maps. The paper emphasizes the potential of leveraging sparsity in attention algorithms and addresses the efficiency and effectiveness of current attention methods. The authors also conduct experiments on image classification and point cloud classification.
Strengths: 1. The idea is novel. This paper proposes a novel attention module, Dic-Attn, which combines dictionary learning and the attention mechanism to effectively explore underlying nonlinear structural information and enhance the comprehensive representation of visual information.
2. The performance on point cloud classification is significant, which demonstrates the effectiveness of proposed method.
3. The writing is easy to follow and the visualization is reasonable.
Weaknesses: 1. Experiments on CIFAR-10 for image classification are not convincing. Please conduct experiments on the ImageNet dataset to compare with the Swin Transformer. Besides, please show results on the ADE20K dataset.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Seeing weaknesses
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Seeing weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your careful review and feedback on this paper.
The following responses might help address your questions about this paper:
**Question & Weakness.** "Experiments on CIFAR10 for image classification are not convincing. Please conduct experiments on ImageNet dataset to compare Swin transformer. Besides, showing the result on ADE20K dataset."
**Answer.** We highly appreciate your valuable advice. In the revision, we have included the additional experimental results.
(1) We use "Swin-T (WSA)" and "Swin-T (Dic-Attn)" to denote the Swin Transformer-Tiny model using the Window Self-Attention (WSA) module and the one using the proposed dictionary learning-based visual attention (Dic-Attn) module, respectively.
(2) Specifically, the added Accuracy (Acc) results of **the classification task on ImageNet-1K** and the Mean Intersection over Union (mIoU) / Mean Accuracy (mAcc) results of **the semantic segmentation task on ADE20K** are as follows:
**Table 1.** Additional classification results on ImageNet-1K.
| Methods | Dataset | Resolution | Acc (50 Epochs) | Acc |
| ---------------- |:-----------:|:---------------:|:---------------:|:-----:|
| Swin-T (WSA) | ImageNet-1K | $224\times 224$ | 70.29 | 81.23 |
| Swin-T (Dic-Attn) | ImageNet-1K | $224\times 224$ | 71.66 | 81.29 |
Swin-T (WSA) and Swin-T (Dic-Attn) are trained from scratch on the ImageNet-1K dataset on four GeForce RTX 4090 GPUs for 300 epochs. The input images are all resized to $224^2$, and the batch size is 256. Each training epoch takes approximately one to two hours. Our proposed Dic-Attn module brings a 1.37% gain in Top-1 accuracy over Swin-T (WSA) early in training (at 50 epochs).
#### **Table 2. Additional segmentation results on ADE20K.** Backbones are trained from scratch on ImageNet-22K. ViT-Adapter-L uses the UperNet framework.
| Methods |Backbone| mIOU | mAcc |
| ---------------------- |:-----:|:-----:|:-----:|
| Segmenter-B |ViT(SA)| 49.20 | 59.32|
| ViT-Adapter-L |Beit(SA)| 56.80 | 69.56 |
| Segmenter-B |ViT(Dic-Attn)| 50.20 | 60.57 |
| ViT-Adapter-L |Beit(Dic-Attn)| 56.93 | 69.78 |
More experiments are still being evaluated; the results will be published as soon as possible and added to the **revised Manuscript and updated Supplementary Material**.
---
Rebuttal Comment 1.1:
Comment: Thanks to the efforts in the rebuttal stage, I appreciate the authors' experiments and response. I am willing to raise my score to 6 (weak accept). Please make sure that all experiments are included in the final version.
Title: Response to rebuttal
---
Reply to Comment 1.1.1:
Comment: We are glad that our response has addressed your concerns, and thank you for raising your score for our paper! We will carefully revise the manuscript and include the experiments in the final version. | Summary: This paper proposes a novel dictionary learning-based attention (Dic-Attn) module. The proposed Dic-Attn module is plug-and-play and can be stacked layer by layer to form a deep attention encoder. Extensive experimental results on computer vision tasks, e.g., image classification and point cloud classification, demonstrate the performance of the proposed method.
Strengths: This paper proposes a novel dictionary learning-based attention (Dic-Attn) module. The proposed Dic-Attn module is plug-and-play and can be stacked layer by layer to form a deep attention encoder. This is an interesting idea and might be worth digging into further.
Weaknesses: 1. Writing: This is not a well-organized paper. It is more like an application paper that pursues SOTA results rather than introducing an interpretable layer. If I understand it correctly, this paper does not focus on any analysis of the sparse dictionary learning but applies it to the transformer. If so, I would expect some insights, e.g., that the self-attention module could be regarded as sparse dictionary learning under some conditions, and offer a proof. Then, the paper should squeeze the length of sections 3.1 and 3.2, and leave more space to discuss or explain the insights of the method.
2. Experiments: As mentioned above, this paper is an application of sparse dictionary learning to the transformer, so readers would expect some explainable experimental results or better performance than the baseline. Experimenting only on small datasets like CIFAR is not convincing when the transformer is commonly used on large datasets like ImageNet. Sparse dictionary learning algorithms are usually applied to robustness or denoising tasks and can achieve good results there, so the authors should conduct experiments showing that a transformer equipped with the proposed module is significantly better than the baseline on robustness or denoising tasks.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: please refer to weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: please refer to weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your careful comments and constructive suggestions. The followings are detailed responses to your questions/concerns:
1. **Weakness 1.** Writing: This is not a well-organized paper. It is more like an application paper that pursues SOTA results rather than introducing an interpretable layer. If I understand it correctly, this paper does not focus on any analysis of sparse dictionary learning but applies it to the transformer. If so, I would expect some insights, e.g., that the self-attention module could be regarded as sparse dictionary learning under some conditions, and offer a proof. Then, the paper should squeeze the length of sections 3.1 and 3.2, and leave more space to discuss or explain the insights of the method.
**Answer.** We appreciate your thoughtful comments on the organization and focus of our paper. In the **revised version**, we strike a better balance between presenting SOTA results and providing a clear explanation of the method.
(1) Section 3.1 discusses the summarized CV attention form, which is also the foundation of our proposed Dic-Attn module. Section 3.2 introduces the dictionary learning problem and derives the closed-form solution of the sparse representation. With these two crucial parts at hand, we propose the concept of dictionary learning-based attention within the context of transformer models. We analyze how data are processed in the model and discuss the way the proposed module extracts attention. Because of the correlation between dictionary learning and biological visual perception, we also review some biologically related viewpoints as evidence for the discussion.
(2) While our paper does present state-of-the-art results, it is not solely an application paper. We also analyze the influence of the encoder depth (model perspective), the dictionary dimension, and the sparsity regularization coefficient (Dic-Attn module perspective). Further, we discuss the number of parameters and the computational burden. Experimental results on image classification (CIFAR-10/100, ImageNet-1K), robustness evaluation (CIFAR-10), real image denoising (SIDD), image segmentation (ADE20K), and point cloud classification (ModelNet40, ScanObjectNN) showcase the potential of the dictionary learning-based visual attention module for capturing visual attention. The potential non-linear structural-information disentanglement ability of dictionary learning can improve segmentation performance, which is validated in the comparison of attention maps in the ADE20K experiments.
(3) Regarding your suggestion to explore the self-attention module as sparse dictionary learning under certain conditions, in section 3.1 of our current paper, we provide a general attention form that also includes the self-attention mechanism. Although both have operations such as inner product, the operations inside the module are completely different, as can be seen by comparing Equation 1 and Equation 4. The most important step in obtaining attention maps for Dic-Attn is nonlinear decomposition and reconstruction, while the process of SA obtaining query, key, and value is a linear transformation. However, we agree that such analysis can provide valuable insights, and we will consider incorporating this aspect into future research or expanding our work to explore the connection between self-attention and sparse dictionary learning.
Thank you once again for your valuable feedback and constructive suggestions!
 
2. **Weakness 2**. Experiments: "As mentioned above, this paper is an application of ..."
**Answer.** We now place more experimental results in the rebuttal box at the top. We appreciate your valuable feedback, as it helps us improve the comprehensiveness and applicability of our experimental evaluation. More experiments are still being evaluated; the results will be published as soon as possible and added to the **revised Manuscript and updated Supplementary Material**.
---
Rebuttal Comment 1.1:
Title: Feedback
Comment: I appreciate the solid feedback from the authors and apologize for missing the robustness results. However, after a double-check, the denoising results do not appear in the original manuscript.
Going back to the first question: I think the authors might not really get my question. The purpose of interpretability should be to serve follow-up benefits, like faster training, better performance, robustness, denoising, easier training, or something else related. I would hold a tolerant attitude towards the follow-up benefits if I had reviewed this paper one or two years ago (since works like VARS were already published). However, from the perspective of diversity and novelty exploration, your work actually has some differences from those sparse dictionary learning works. And I also agree that novelty needs time (engineering time) to match its real value. So, could you please justify the meaning of your work in terms of follow-up benefits, or some new insight that contributes to the Transformer/sparse dictionary learning communities?
The justification could be a discussion of future work about the reasonable potential of your method. For example, we know that sparse dictionary learning holds two main properties: 1) linear inverse and 2) sparsity. Could any follow-up benefits of your method be based on these two properties?
I will consider to change my score based on your response.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank you for your careful review and thoughtful feedback. We will carefully revise the original manuscript and ensure that the necessary information is included in the final version! In response to the first question, we explain from two aspects:
- **Our exploration:**
- Upon studying and summarizing the general paradigm of computer vision attention and the concerns of biological vision in Section 3, we discovered that the crucial step in both our proposed module and existing successful computer vision attention modules involves **nonlinear decomposition and reconstruction**.
As highlighted by the reviewer, dictionary learning holds two key properties: **1) linear inverse, and 2) sparsity**.
- Specifically, dictionary learning aims to learn a set of atoms such that a given signal can be well approximated by a sparse linear combination of these atoms. The dictionary learning process can be viewed as a form of matrix decomposition with a sparse constraint, which is an NP-hard optimization problem. VARS formulated this optimization process as an ODE recurrent attractor network that captures attention; it also demonstrated that sparsity leads to the emergence of attention and improves model robustness. **In our paper, we propose an alternative attention method and an implementation of a dictionary learning-based attention module.** We employ Elastic Net regularization to enforce sparsity in the dictionary learning and sparse coding problem. This regularization transforms the problem from NP-hard into a convex optimization problem, for which a closed-form solution for the sparse representation exists.
- Our proposed module emphasizes the significance of the sparsity constraint in enhancing the discriminative power of the learned dictionary, similar to the objective of traditional dictionary learning in solving visual tasks. **It is worth noting that numerous visual processing problems can be formulated as inverse problems, and the two key properties of dictionary learning address them well.** Hence our proposed module reveals great potential in disentangling non-linear structural information. As the attention module plays a crucial role in Vision Transformers, models equipped with our proposed Dic-Attn module demonstrate accelerated training convergence and improved performance on inverse problems such as denoising and robustness, as well as on other tasks including segmentation and classification.
- **Follow-up benefits and possible future research:**
- **Conducting further investigation and validation of the diverse levels of semantics exhibited by dictionaries learned at different depths in NLP scenarios** would be valuable.
- The Transformer/sparse dictionary learning communities can further **investigate the trade-off between sparse reconstruction accuracy and model performance (robustness, accuracy, generalization, ...)**. Specifically, in this article, we achieve complete sparse reconstruction in the proposed module, and the attention map is subsequently derived from the task-driven selection matrix. Consequently, considering the large number of parameters of Transformer models, one can explore **the possibility of intentionally reducing the accuracy of the sparse reconstruction within the attention module**.
- Additionally, when the learned dictionary serves as a feature base, the advantages of **low computational complexity and cost** of sparse coding can facilitate future edge deployment and training. These topics have never been raised or discussed in related papers for the Transformer + dictionary learning communities.
Thanks again for your constructive comments! We truly appreciate your time and look forward to your considering raising your rating to a higher accept rating! | Summary: This paper introduces a new attention mechanism, dictionary learning-based attention (Dic-Attn), to replace existing attention modules (e.g., self-attention) in deep networks (e.g. Vision Transformer, ViT). The proposed Dic-Attn comes from the combination of dictionary learning and sparse coding, and sparse visual attention. The Dic-Attn module is used to replace self-attention in transformer models for image classification, point cloud classification, and image segmentation. Results show the Dic-Attn performs better than self-attention on these tasks.
Strengths: 1. This paper presents a very interesting idea to combine dictionary learning and sparse coding with attention mechanisms. Attention is proposed with the hypothesis that not all features are equal and we should only pay attention to important ones. The spirit is similar to sparse representation learning. The proposed Dic-Attn combines them in an elegant way.
2. The proposed Dic-Attn plugged into the existing models is shown to outperform self-attention for various computer vision tasks: image classification, point cloud classification, and semantic segmentation, with noticeable improvements.
3. The influence of hyper-parameters, dimension k and attention block number l, is studied, and the results provide some insight into how they may affect model performance. Essentially, larger k and larger l lead to better performance.
Weaknesses: 1. How does the backward process work in Algorithm 1? The sparse representation \phi in Eq. 3 is not differentiable.
2. What are the mIoU values of the proposed method and the counterpart using self-attention on the ADE20K dataset for semantic segmentation?
3. Since the paper proposes to learn sparse representation using dictionary learning-based attention, some example sparse representations should be shown in order to verify the effectiveness of the proposed method. While Fig. 3 visualized some attention maps for certain categories, the sparsity of the representation is still unknown.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please address the concerns above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors did not discuss the limitations of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful review and constructive suggestions. The following responses might help address your concerns and questions about this paper:
1. **Weakness 1.** How does the backward process work in Algorithm 1? The sparse representation $\phi$ in Eq. 3 is not differentiable.
**Answer.** We sincerely thank you for perusing this work. We now further clarify how the backward process works in Algorithm 1 in the **revised version** and address the concern about the differentiability of the sparse representation $\phi$.
In Algorithm 1, the backward process refers to the optimization of the dictionary and the transformed vector/matrix by minimizing the total final loss. This process also involves updating the sparse representation $\phi$ while keeping the dictionary $D$ and the input visual features fixed at the value obtained in the previous step.
Note that we employ Elastic Net regularization to enforce sparsity, relaxing the original dictionary learning/sparse coding problem $\min_{D,\phi} ||x - D\phi||_2^2 + \lambda ||\phi||_0$.
 
Elastic Net regularization transforms the problem from NP-hard to a convex optimization problem with a closed-form solution for $\phi$.
The key objective function for dictionary learning, which decomposes data into a dictionary and its corresponding sparse representation, can be expressed as follows:
$\min_{D, \phi}J_{Dictionary\ Learning},$ and $J_{Dictionary\ Learning} = ||x - D\phi||_2^2 + \left( \beta ||\phi||_1 + \frac{\lambda}{2} ||\phi||_2^2 \right),$
where $x\in R^{n}$ represents the input visual feature, $D\in R^{n\times k}$ the dictionary, $\phi \in R^{k}$ the sparse code, and $\beta$ and $\lambda$ are hyper-parameters balancing the $\ell_1$ and $\ell_2$ regularization terms.
Hence, we start from an initial (unreliable) dictionary $\mathbf{D} \in \mathcal{S}(n,k) := \{\mathbf{D}\in \mathbb{R}_{*}^{n\times k}: \mathrm{diag}(\mathbf{D}^T\mathbf{D})=\mathbf{I}_k\}$.
Then, fixing the dictionary $D$, the objective for obtaining $\phi$ becomes a regularized least-squares problem. Specifically, setting the derivative of the dictionary learning objective with respect to $\phi$ to zero, i.e., $\frac{\partial J}{\partial \phi} = 0$, we derive an analytical expression for the sparse coding vector $\phi$, as shown in Equation 3 of Algorithm 1 in the main document:
$\phi^*_i = \left(\mathbf{D}^{T} \mathbf{D} + \lambda \mathbf{I} \right)^{-1} \left(\mathbf{D}^{T} x_i + \beta \mathbf{v} \right)$.
In summary, the sparse code $\phi$ is fixed to the value obtained in the previous step. The backpropagation updating strategy then optimizes the dictionary, the diagonal transform matrix $\mathbf{W_D}$, and the transform matrix $\mathbf{W_{\Phi}}$ by minimizing the total final loss. Subsequently, the dictionary $D$ and the input visual features are updated, and the corresponding sparse representations are recomputed.
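As an illustrative sketch (our own toy code, not the authors' implementation; the sign vector $\mathbf{v}$ and all dimensions are assumptions), the closed-form code and the objective can be evaluated as:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 12, 8
lam, beta = 0.5, 0.1                       # l2 and l1 weights

D = rng.standard_normal((n, k))
D /= np.linalg.norm(D, axis=0)             # enforce diag(D^T D) = I_k
x = rng.standard_normal(n)                 # one input feature vector
v = np.sign(rng.standard_normal(k))        # assumed sign vector for the l1 term

# closed-form sparse code: (D^T D + lam I)^{-1} (D^T x + beta v)
phi = np.linalg.solve(D.T @ D + lam * np.eye(k), D.T @ x + beta * v)

# elastic-net objective J evaluated at phi
J = np.sum((x - D @ phi) ** 2) + beta * np.abs(phi).sum() \
    + 0.5 * lam * np.sum(phi ** 2)
assert phi.shape == (k,) and np.isfinite(J)
```

Because `phi` is obtained from a linear solve, gradients with respect to $D$ and the transform matrices can flow through this expression during backpropagation.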
 
2. **Weakness 2.** "What are the mIoU values of the proposed method and the counterpart using self-attention on the ADE20K dataset for semantic segmentation?"
**Answer.** We now place more experimental results in the rebuttal box at the top. More experiments are still being evaluated, and the results will be published as soon as possible.
 
3. **Weakness 3.** Since the paper proposes to learn sparse representation using dictionary learning-based visual attention, some example sparse representations should be shown in order to verify the effectiveness of the proposed method. While Fig. 3 visualized some attention maps for certain categories, the sparsity of the representation is still unknown.
**Answer.** We appreciate your careful comments. While Fig. 3 visualizes attention maps for certain categories, we acknowledge that the sparsity of the sparse representation $\Phi$ is not explicitly demonstrated in that particular figure.
In the **revised version**, we further include results on the impact of the hyper-parameter $\lambda_2$ on the sparsity and accuracy of the sparse representation. They show that sparsity increases as $\lambda_2$ increases. According to [1, 2, 3] and the existing results, the sparsity of the representation enhances the discriminative power of the dictionary as a feature base, while the task-driven transformation matrix selects well-disentangled features. This enables the model to capture more accurate visual attention and achieve better performance.
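One simple way to quantify such sparsity (our own illustrative metric, not necessarily the protocol used in the paper) is the fraction of near-zero entries in $\Phi$:

```python
import numpy as np

def sparsity(Phi, tol=1e-6):
    """Fraction of (near-)zero entries in the sparse representation."""
    return float(np.mean(np.abs(Phi) < tol))

Phi = np.array([[0.0, 1.2, 0.0],
                [0.0, 0.0, -0.7]])
assert abs(sparsity(Phi) - 4 / 6) < 1e-12   # 4 of 6 entries are zero
```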
It is important to note that the primary focus of our paper is on introducing the concept of the dictionary learning-based visual attention module and showcasing its potential for capturing visual attention. Fig. 3 illustrates the ability of the proposed Dic-Attn module to capture relevant features for specific categories.
 
[1] Fuchs, J-J. "On sparse representations in arbitrary redundant bases." _IEEE transactions on Information theory_ 50.6 (2004): 1341-1344.
[2] Elad, Michael, and Alfred M. Bruckstein. "On sparse signal representations." _Proceedings 2001 international conference on image processing (Cat. No. 01CH37205)_. Vol. 1. IEEE, 2001.
[3] Wei, Xian, Hao Shen, and Martin Kleinsteuber. "Trace quotient with sparsity priors for learning low dimensional image representations." _IEEE transactions on pattern analysis and machine intelligence_ 42.12 (2019): 3119-3135. | Rebuttal 1:
Rebuttal: # More Experimental Results
## **Answer** to Related Questions including:
- Weakness 2 by Reviewer 2tL6
- Weakness 2 by Reviewer Vec1
- Weakness 2 by Reviewer nmhX
- Weakness 1 by Reviewer 1dqN
- Weakness 2 and Question 1 by Reviewer q2Dg
We appreciate your comments and your expectation of more explainable results or improved performance compared to the baseline. The following responses might help address your concerns about this paper:
**(1) Image Classification:** We have conducted additional experiments on the ImageNet-1K dataset for image classification. We use "Swin-T (WSA)" and "Swin-T (Dic-Attn)" to denote the Swin Transformer-Tiny model using the Window Self-Attention (WSA) module and the one using the proposed dictionary learning-based visual attention (Dic-Attn) module, respectively. Specifically, the added results of the classification task on ImageNet-1K are as follows:
#### **Table 1. Additional classification results on ImageNet-1K.** Models are trained from scratch on ImageNet-1K. The input images are all resized to $224^2$. Our proposed Dic-Attn module brings a 1.37% gain in Top-1 accuracy compared with the Swin-T (WSA) backbone early in training (at 50 epochs).
|Methods|Dataset|Resolution|Acc (50 Epochs)|Acc|
|---|:-:|:-:|:-:|:-:|
|Swin-T(WSA)|ImageNet-1K|$224\times 224$|70.29|81.23|
| Swin-T(Dic-Attn) |ImageNet-1K |$224\times 224$|71.66|81.29|
**(2) Robustness:** Regarding your suggestion to explore the application of sparse dictionary learning algorithms in robustness or denoising tasks, we agree that such experiments provide valuable insights into the capabilities of our proposed method. In fact, we already report model performance under the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) adversarial attacks in Section 4.2.1 of the main document. We set the perturbation magnitude to $\epsilon = \frac{4}{255}$ for FGSM and $\epsilon= \frac{8}{255}$ for PGD, with iteration number $t = 10$ and step size $\alpha = \frac{2}{255}$. We classify images corrupted by FGSM and PGD, respectively, and evaluate the models by their classification accuracy under attack.
#### **Table 2. Robustness Evaluation Results on CIFAR-100 dataset.** Comparison of our approach with backbones and other attention baseline methods, against various adversarial attacks.
| Model (From Scratch) | FGSM | PGD | Clean | # Param (M) |
| -------------- |:----:|:----:|:-----:|:----------:|
| RVT (SA) | 0.20 | 0.10 | 64.80 | 8.29 |
| RVT (VARS-D) | 10.58 | 3.80 | 62.20 | 7.68 |
| RVT (Dic-Attn) | 10.00 | 4.30 | 55.93 | 7.48 |
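For reference, the two attacks used above can be sketched as follows (a generic numpy illustration under assumed interfaces; `grad_fn`, which returns the loss gradient with respect to the input, is our stand-in for the model's backward pass):

```python
import numpy as np

def fgsm(grad_fn, x, eps=4/255):
    """One-step FGSM: move x along the sign of the loss gradient."""
    x_adv = x + eps * np.sign(grad_fn(x))
    return np.clip(x_adv, 0.0, 1.0)

def pgd(grad_fn, x, eps=8/255, alpha=2/255, steps=10):
    """PGD: iterated signed-gradient steps, projected into the eps-ball."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)   # l_inf projection
        x_adv = np.clip(x_adv, 0.0, 1.0)           # valid pixel range
    return x_adv

# toy check: with a constant gradient, the perturbation stays within budget
x = np.full((3, 3), 0.5)
g = lambda z: np.ones_like(z)
assert np.max(np.abs(fgsm(g, x) - x)) <= 4/255 + 1e-9
assert np.max(np.abs(pgd(g, x) - x)) <= 8/255 + 1e-9
```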
**(3) Denoising:** In response to your feedback, we have conducted experiments on the SIDD dataset for image denoising. The results of these experiments show that the transformer augmented with our proposed module outperforms the baseline in denoising performance.
#### **Table 3. Additional denoising results on SIDD.**
| Methods | PSNR | SSIM |
| ------------------- |:----:|:----:|
| Restormer(SA) | 39.53 | 0.960 |
| Restormer(Dic-Attn) | 39.54 | 0.958 |
**(4) Inference Time:** We add an inference-time test to explicitly compare the efficiency of various attention modules. The results are as follows:
#### **Table 4. Comparison of the inference time cost (ms) of various attention modules.**
| Methods/Indicators | GPU Inference Time (ms) |
| ------------------ |:-----------------------:|
| SA | 82.20 |
| DA | 64.40 |
| $A^2$ | 8.00 |
| HAM | 7.70 |
| ACF | 22.60 |
| Dic-Attn(Ours) | 24.70 |
These results report the inference time of our proposed module and the baseline models. We have added them as Table 4 in the revised version, highlighting the advantages and disadvantages of our proposed model in terms of computational efficiency.
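As a side note on methodology, per-module inference time of this kind is typically measured with warm-up iterations followed by averaging over many runs. The sketch below (our illustration, not the authors' benchmark harness; the workload is a hypothetical stand-in) shows the general pattern.

```python
import time

def time_module(fn, warmup=10, iters=100):
    """Average wall-clock latency of fn() in milliseconds.

    Warm-up runs are discarded so one-time costs (allocation, caching,
    kernel compilation) do not skew the average; on a GPU one would also
    synchronize the device before reading the clock.
    """
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters * 1e3

# Hypothetical stand-in for an attention module's forward pass.
latency_ms = time_module(lambda: sum(i * i for i in range(1000)))
```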
We appreciate your valuable feedback, as it helps us improve the comprehensiveness and applicability of our experimental evaluation. More experiments are still being evaluated, and the results will be added to the **revised Manuscript and updated Supplementary Material** as soon as possible. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper proposes a novel attention mechanism called Dic-Attn that enhances the performance of deep vision models on various computer vision tasks. Specifically, the Dic-Attn module allows for the disentanglement of underlying nonlinear structural information in visual data, providing an intuitive and elegant way to exploit discriminative information and provide visual attention. Moreover, the proposed method uses shallow-depth attention modules, which are computationally efficient and can be easily integrated into existing deep learning architectures. The paper presents experimental results on several computer vision tasks, demonstrating the effectiveness of the proposed method and its superiority over several state-of-the-art methods.
Strengths: - The proposed method is intuitive and novel. It combines the advantages of dictionary learning with deep neural networks. Compared to previous related methods like EMANet and HamNet, it cleverly avoids the gradient problem in the iterative procedure. From the methodology perspective, this method is beyond the acceptance bar.
- The proposed Dic-Attn module achieves promising performance on various computer vision tasks, including image classification and point cloud classification. Besides, the proposed method shows its efficiency over several attention methods on these tasks. For example, the paper compares the proposed method with several existing attention mechanisms, such as SE-Net, CBAM, and ECA-Net, and shows that the proposed method achieves better performance in terms of accuracy and efficiency.
- The paper provides a detailed analysis of the proposed method, including ablation studies and visualization of attention maps, to better understand its behavior and performance. For example, the paper shows that the proposed method can capture both local and global features of the input data and that the learned attention maps are interpretable and meaningful.
- The paper presents experimental results on several computer vision tasks, demonstrating the effectiveness of the proposed method and its potential for real-world applications. For example, the paper shows that the proposed method can be used for object recognition, scene understanding, and 3D point cloud classification, which are important tasks in computer vision.
Weaknesses: # Inadequate referencing
The current manuscript overlooks referencing certain pertinent studies. The proposed method, while distinct in its own right, exhibits conceptual parallels with several prior works that should be acknowledged. Comparison to these similar works, which are neither referenced nor discussed, would greatly augment the overall discourse.
References:
1. Zhu Z, et al. Asymmetric non-local neural networks for semantic segmentation[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2019: 593-602.
2. Li X, Zhong Z, Wu J, et al. Expectation-maximization attention networks for semantic segmentation[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 9167-9176.
# Limited results presentation
I found the experimental results to be somewhat restrictive, with only one large-scale benchmark being demonstrated. For a more comprehensive understanding of the study's impact, I recommend showcasing the complete ADE20K results within the supplementary materials, instead of merely citing a single score within the body of the text.
# Visualization and explanation disparity
While Figure 3 in the main document and Figure 1 in the supplementary material do an effective job of portraying the inner workings of the proposed method, the related explanation (lines 336-338) lacks clarity. Determining which attention map is closer to human visual attention, or identifying the "referred objects", proves challenging due to the blandness of the given explanation. This area requires more in-depth exploration and improved elucidation to make the model's performance more intuitive to understand.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: For W_D and W_scale, why choose them as vectors instead of matrices?
Are there any ablation study to verify the advantages of this choice?
Besides, if they are vectors, they should be in bolded lower case.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No potential negative societal impact
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your careful review, positive feedback and constructive suggestions. The following responses might help address your questions about this paper:
1. **Weakness 1.** "**Inadequate referencing...**"
**Answer.** Thank you for your feedback. We have indeed reviewed the two papers you mentioned. To address this concern, we revised the manuscript to include a discussion and made sure to properly cite these related papers in the paper, so that readers can easily access and review the relevant literature.
[1] exploits the non-local network's ability to capture the long-range dependencies that are crucial for semantic segmentation, while alleviating the shortcomings of the non-local block: it reduces the intermediate dimension of the key and value in the attention without affecting the final output dimension, $\mathbb{R}^{N\times N} \times \mathbb{R}^{N\times C}\rightarrow \mathbb{R}^{N\times S} \times \mathbb{R}^{S\times C} \rightarrow \mathbb{R}^{N\times C}$. To further improve performance, [1] also adopts spatial pyramid pooling to fuse multiple scales with different sampled sizes $S$. This suggests that multi-scale dictionaries may be a promising direction for future research.
[2] proposes a new attention mechanism for semantic segmentation, called Expectation-Maximization Attention (EMA). The expectation (E) step estimates the attention map, and the maximization (M) step updates the bases by maximizing the complete-data likelihood. The output is computed as the weighted sum of the bases, where the weights are the normalized final attention maps.
Our method applies dictionary learning to obtain a set of basis vectors that represent the data more efficiently and compactly. The training of the task-driven atom-weighting matrix $\mathbf{W_D}$ can also be viewed as an expectation-maximization process.
We appreciate your concern about the lack of references to certain pertinent studies in the current manuscript and your suggestion to include a comparison with these similar works.
 
[1] Zhu Z, et al. Asymmetric non-local neural networks for semantic segmentation[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2019: 593-602.
[2] Li X, Zhong Z, Wu J, et al. Expectation-maximization attention networks for semantic segmentation[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 9167-9176."
 
2. **Weakness 2.** "**Limited results presentation...**"
**Answer.** We now place more experimental results in the rebuttal box at the top. More experiments are still being evaluated, and the results will be added to **the revised version** as soon as possible.
 
3. **Weakness 3.** "**Visualization and explanation disparity...**"
**Answer.** Thank you for this valuable suggestion. We agree that visual interpretations can help readers better understand the experimental results. In the **updated version**, we make the following two revisions.
(1) We further clarify that the output Attention Maps are obtained through the reconstruction of the re-weighted dictionary and transformed sparse representations.
The diagonal transformed matrix $\mathbf{W_D}$ and transformed matrix $\mathbf{W_{\phi}}$ are specifically designed to re-weight the dictionary and transform its corresponding sparse representations, respectively. $\mathbf{W_D}$ operates in a vector-wise manner, where each element on the diagonal serves as a weight for the dictionary columns, i.e., the atoms. $\mathbf{W_{\phi}}$, on the other hand, transforms the sparse encoding element-wise. These two parameters are updated via backpropagation and are thus driven by the task's final objective function. The dictionary $\mathbf{D}$ belongs to $\mathcal{S}(n,k) := \{\mathbf{D}\in \mathbb{R}_{*}^{n\times k} : \mathrm{diag}(\mathbf{D}^T\mathbf{D})=\mathbf{I}_k\}$ and the corresponding sparse representations are subject to the Elastic Net constraint, so both are also data-driven.
(2) The output attention maps demonstrate that our proposed attention module exhibits sharper and more precise object boundaries, enhancing the segmentation accuracy of the model. Based on these observations, we infer that this advantage stems from the pipeline our attention module introduces: the nonlinear structural-information disentanglement ability of dictionary learning can improve segmentation performance.
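The re-weighting described above can be made concrete with a minimal sketch, under our reading of the notation (not the authors' implementation; the dictionary, codes, and dimensions are hypothetical placeholders): a diagonal $\mathbf{W_D}$ scales dictionary atoms column-wise, while $\mathbf{W_{\phi}}$ transforms the sparse codes.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, N = 16, 8, 32              # feature dim, number of atoms, number of tokens

D = rng.normal(size=(n, k))      # dictionary, one atom per column
D /= np.linalg.norm(D, axis=0)   # atoms normalized so diag(D^T D) = I_k
A = rng.normal(size=(k, N))      # stand-in sparse codes, one column per token

w_d = rng.uniform(size=k)        # learnable per-atom weights, a vector ...
W_D = np.diag(w_d)               # ... acting as a diagonal matrix on columns
W_phi = rng.normal(size=(k, k))  # learnable transform of the sparse codes

# Attention output: reconstruction from re-weighted atoms and transformed codes.
out = (D @ W_D) @ (W_phi @ A)    # shape (n, N)
```

Because $\mathbf{W_D}$ is diagonal, `D @ W_D` simply rescales each atom, which is why the rebuttal describes its action as vector-wise.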
 
4. **Question.** "For W_D and W_scale, why choose..."
**Answer.** We sincerely appreciate your careful review of and constructive suggestions for this paper. The typo has been corrected in the **revised version**.
The diagonal transformed matrix $\mathbf{W_D}$ and matrix $\mathbf{W_{\phi}}$ are specifically designed to re-weight the dictionary and transform its corresponding sparse representations, respectively.
(1) $\mathbf{W_D}$ operates in a vector-wise manner, where each element in the diagonal serves as a weight for the dictionary columns, i.e., atoms.
(2) On the other hand, $\mathbf{W_{\phi}}$ implements the sparse encoding of individual elements.
These transformation parameters are driven by the task objective function and are updated through the backpropagation process. | null | null | null | null | null | null |
Geometry-Aware Adaptation for Pretrained Models | Accept (poster) | Summary: In this paper, the authors explore the adaptation of pretrained models from a geometric perspective. Specifically, the paper treats the label space as a metric space and proposes a simple approach to predicting new classes based on a pretrained model's zero-shot predictions, without any further training. The paper also provides theoretical analysis covering learning-theoretic results, locus covers, and active next-class selection. In the empirical results, the paper conducts experiments on several datasets and verifies the effectiveness of the proposed method.
Strengths:
- The paper is generally well-written and easy to follow.
- The topic of adaptation on the pretrained models from the geometric perspective is interesting.
- The proposed method is simple and efficient.
- The main claims are generally supported by theoretical results.
Weaknesses: - The proposed method may require additional information, such as label relationships, which may not be available in some real-world situations.
- It lacks theoretical analysis about the superiority of the proposed LOKI.
- The empirical results are not adequate to support the main claims.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - As discussed in the Weaknesses, it is not clear whether the proposed LOKI is the optimal geometric adaptation method for enabling the prediction of unobserved classes. Are there any alternatives to the Fréchet mean estimator that yield more reliable predictions (an example)? It would benefit the paper a lot if some empirical or theoretical comparisons were added to demonstrate the effectiveness of the proposed LOKI.
- I appreciate this paper that most claims are supported by theoretical results. It would further improve the paper if more empirical results were added to support these claims. For example, some toy experiments can be provided to verify Theorem 4.8.
- In Table 1, the effectiveness of LOKI is explored on CIFAR-100. It would be more convincing to conduct experiments on more real-world datasets.
- The experiments in Table 1 seem not comprehensive. From the results, we can see that LOKI presents limited improvements on the ViT-L-14 backbone. It would be better to add more discussion and more comprehensive empirical results.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review! We have responded to your points below.
**On the usage of the Fréchet mean estimator**
We made the choice to use the Fréchet mean because we have information about the relationships between classes. To ground our rationale in a different type of problem, consider a regression task: the output is a **scalar,** and it is possible to compare labels (e.g., one is larger than another, etc.). Labels can also be averaged, and indeed, the Bayes optimal linear predictor is just such an average—a mean.
However, in our setting, we are in a far more general case than continuous spaces like $\mathbb{R}$. In fact the only information we have access to is relationships between classes, which can be expressed as distances. So we need an analogue to the mean that operates using only distances, and the canonical analogue is the Fréchet mean. Indeed, it has beautiful theoretical [1] and optimization [2] properties. Other variations of the weighted Fréchet mean have been used in supervised learning in structured prediction settings [3], although since our problem does not permit training, we require a novel approach.
However, it is not the only possible choice. For example, taking an exponent of 1 instead of 2 on the distance term produces the Fréchet median, which is known to have excellent robustness properties, but lacks some of the theoretical properties we rely on. Our framework is flexible—we could obtain a new version of Loki that uses such an object as well. **We include new experimental results that evaluate the effect of changing the exponent**; please see Table 2 in our new results.
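The estimator discussed above can be made concrete with a small sketch (our illustration; the exact weighting used by Loki may differ): given softmax probabilities over the observed classes and pairwise graph distances, the prediction is the candidate class minimizing the expected distance raised to a chosen exponent, with exponent 2 giving the Fréchet mean and exponent 1 the Fréchet median.

```python
import numpy as np

def frechet_estimate(p, dist, exponent=2):
    """Weighted Frechet estimate over a finite metric space.

    p:    softmax probabilities over the k observed classes, shape (k,)
    dist: dist[y, z] = d(observed class y, candidate class z), shape (k, m)
    exponent=2 gives the Frechet mean; exponent=1 the Frechet median.
    """
    cost = p @ dist ** exponent   # expected d^exponent for each candidate
    return int(np.argmin(cost))

# Toy path graph 0-1-2-3-4: only the endpoint classes {0, 4} are observed.
nodes = np.arange(5)
dist = np.abs(nodes[[0, 4]][:, None] - nodes[None, :]).astype(float)

pred = frechet_estimate(np.array([0.5, 0.5]), dist)  # equal mass on endpoints
```

With equal mass on the endpoints, the estimate is the midpoint class 2, an unobserved class; shifting mass toward one endpoint moves the prediction along the path.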
**On experiments to verify Theorem 4.8**
We agree and indeed our submission already includes exactly this! Figure 2 (right) is a synthetic experiment that shows how Theorem 4.8 can be used to optimally increase the size of the locus for tree graphs. In addition, since submission, we have included a new experiment that verifies Theorem 4.8 on ImageNet – a much more realistic setting.
**On CIFAR-100 and more real world datasets**
Our submission includes three real-world datasets in addition to CIFAR-100: ImageNet, PubMed, and LSHTC. While ImageNet (ILSVRC 2012) includes 1,000 classes, PubMed includes nearly 10,000, and LSHTC includes over 325,000 classes.
**On limited improvements using the ViT-L-14 backbone**
We were excited to find that there are substantial gains to be had by applying Loki to a variety of pretrained models from different settings: supervised models, self-supervised models, and zero-shot models including CLIP and ALIGN. We hypothesize that even greater gains can be obtained for the ViT-L-14 CLIP backbone if we were to use a more refined approach for extracting the metric space or tune the softmax temperature.
[1] https://arxiv.org/abs/1609.03045
[2] https://arxiv.org/abs/2003.00335
[3] https://arxiv.org/abs/1605.07588
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: Dear authors,
Thanks for the detailed replies. After reading the rebuttal and other reviews, my concerns have been adequately addressed. I would like to raise the score to '5'.
Best,
Reviewer dZLD
---
Reply to Comment 1.1.1:
Comment: We are excited to include these new results in the final version! Thank you for raising these points in your review, and we appreciate your engagement with our rebuttal! Please let us know if there are any questions we can answer about our new results or anything else about the paper. Thanks! | Summary: This paper explores the concept of geometry-aware adaptation for label spaces. The paper introduces a method called "LOKI" that allows pretrained models to make predictions for classes that were not observed during training. LOKI utilizes metric space information to adapt the model's predictions to unobserved classes.
The paper discusses the theoretical foundations of LOKI, including the definition of loci and identifying locus covers in graphs. It presents algorithms for efficiently computing the locus and describes an active learning-based strategy to select the next class for observation in order to maximize the size of the locus.
Experimental results demonstrate that LOKI improves the performance of zero-shot models even without external metric space information. It adapts to label spaces with a large number of unobserved classes and outperforms baseline models in terms of mean squared distances. The paper also validates the effectiveness of the active next class selection approach, showing that it leads to larger loci compared to random selection.
Strengths: 1. Originality: The paper exhibits a high degree of originality in several aspects. Firstly, it introduces the concept of geometry-aware adaptation for label spaces, which is a novel approach to addressing the challenge of predicting unobserved classes in pretrained models. The formulation of LOKI and its integration with metric space information is a unique contribution. Additionally, the paper presents novel definitions, algorithms, and strategies related to loci, identifying locus covers, and active learning-based class selection. These original contributions set the paper apart and make it a valuable addition to the field.
2. Clarity: The paper excels in terms of clarity, making the research accessible to a wide range of readers. The authors provide clear explanations of concepts, definitions, and algorithms, ensuring that readers can understand the technical aspects of LOKI. The paper is well-structured, with logical flow and section organization.
3. Quality: The paper demonstrates a high level of quality in terms of its theoretical foundations, experimental methodology, and presentation of results. The authors provide a thorough and rigorous analysis of the problem, offering formal definitions and proving relevant theorems.
4. Significance: The paper holds significant importance in the field of machine learning and predictive modeling. It addresses a challenging problem of predicting unobserved classes in pretrained models, which has practical implications in various real-world scenarios. The introduction of LOKI and its ability to adapt predictions using metric space information expands the capabilities of pretrained models, enabling them to make more accurate predictions for classes that were not observed during training. The experimental results demonstrate the effectiveness of LOKI and its superiority over baseline models, further highlighting its significance in advancing the field.
Weaknesses: 1. Lack of Comparative Analysis: While the paper presents LOKI as a novel approach, it would benefit from a more comprehensive comparative analysis against existing methods or related works. Providing such a comparison would strengthen the paper's argument for the originality and effectiveness of LOKI.
2. Scalability Analysis: The paper briefly mentions the computational complexity of LOKI, but a more detailed analysis of its scalability would be valuable.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. While the paper briefly mentions the computational complexity of LOKI in term of label spaces, will time consumption of this method be also related with higher-dimensional metric spaces? When compared with original method without LOKI, how much is extra consumption of memory and runtime with LOKI?
2. When multiple external metrics are provided, can we decide which metric is most suitable in advance?
3. Is the property pairwise decomposable necessary for the improvement of LOKI? Does it only work on the dataset satisfying this property?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: No potential negative societal impact of their work. Suggestions are provided in the weakness and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and for **noting the originality, clarity, quality, and significance of our work!**
**On comparative analyses**
We are unaware of other approaches that operate in our setting: adapting a pretrained classifier to enable the navigation of metric spaces without any additional training. **However, we can evaluate alternatives to Loki** – Loki replaces the $\text{arg max}$ with the Fréchet mean, such as the Fréchet median. We compared four alternatives to Loki in our new experimental results (see Table 2), and we found that Loki was optimal for minimizing the expected squared distance.
**On scalability analyses and runtime**
We demonstrate that Loki can be scaled up to the ~325,000-class LSHTC dataset by using approximations such as graph summarization. We will provide a more thorough ablation of the scalability of Loki in the final version of the paper.
**On choosing the best metric**
This is an important point! Typically, the answer would be to evaluate different metrics on a held-out validation set, but in this setting the expected squared distance itself depends on the metric space. In order to compare metric spaces, we need the errors to be on the same scale. We provide new results that normalize all of the expected squared distances by the squared diameter of the metric space, which brings all of the errors onto the same 0-1 scale and allows comparison with other performance measures such as the 0-1 error. This enables tuning the choice of metric space as a hyperparameter; in the case of CLIP applied to CIFAR-100, we find that the WordNet tree is the optimal metric space among those we evaluated.
**On whether or not pairwise decomposability is necessary for Loki**
Pairwise decomposability is only needed for efficient computation of the active data selection strategy. Loki itself does not depend on this property – Loki can be applied using a linear transformation of the softmax probabilities regardless of the properties of the metric space or loci.
---
Rebuttal Comment 1.1:
Title: Reaction to authors' response
Comment: Dear reviewer 2m87. Has the authors' response answered your questions? Is there any other clarification you would want to request from the authors before the discussion period ends?
AC | Summary: The authors consider the problem of predicting examples from unseen but "known" classes. Taking inspiration from structured prediction, the paper intends to exploit the knowledge of structure in the full label space. The authors propose an alternative to the popular "argmax-over-logits" prediction, called Loki, by computing the Frechet mean with an appropriate distance metric in label space. A theoretical result on a simple logistic data model shows the sample complexity bound with Loki. Also considered is the important question of what training labels are necessary and sufficient to be able to span the whole label space during prediction. On the experimental effectiveness, the authors demonstrate reasonable percentage point improvement over baseline "argmax" standard prediction.
Strengths:
1. My knowledge in structured prediction is limited. However, it seems to me that the results in this paper are significant and interesting
2. I found the method Loki to be a neat approach to predicting unseen classes when the distance metric for the label space is known. It is a scalable method, since the computation involved is a fixed linear transformation.
3. The authors have stated and proved the minimal locus cover for two non-trivial label space structures with clear motivation: phylogenetic trees and grid graphs
4. An algorithm of a "greedy" nature through an active-learning framework is then given, which is polynomial time.
5. Experimental results: it seems that Loki achieves strong results for CLIP
6. While I have not checked the proofs in detail, the methodology appears sound and rigorous.
Weaknesses:
1. It is not clear to me if there can be guarantees provided when the true label space metric is not available or only approximately known
2. Theorem 1 is for the case of logistic regression. It is not clear whether this is a powerful enough model for practice.
3. I have either missed this part or there needs to be a detailed discussion of the implications of Theorem 1
4. I suggest providing "error bars" or significance level for the numbers reported in Table 1
5. More experimental details would be helpful for reproducing the results
6. It is unclear to me if there are other competitive structured prediction techniques applied to pretrained models and what gains Loki makes over them, if any.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors:
1. Line 210: "In cases involving an extremely large number of classes, it is desirable to use LOKI on the smallest 211 possible set of observed classes" - is this because sample complexity of in Theorem 1 scales as K^2? Are there other considerations?
2. I would prefer to get more explanation for the proposed use-case of grid graphs. While phylogenetic trees are more intuitive, grid graphs might benefit from a simple example
3. Does a counterpart of Theorem 4.8 hold for grid graphs?
4. Sec 5.2: Is one-vs-rest the most competitive baseline?
5. For the result on computing loci for partially observed label spaces, are PubMed and LSHTC the standard datasets to consider? Please provide any reasoning behind this choice. Any references will be useful
6. Fig 1 (and related ones in Appendix)- it is not immediately clear what the authors intend to demonstrate here.
7. I am curious about the impact of Loki on calibration.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations:
I did not see an explicit discussion on the limitations and scope for improvement
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful review and for **noting the significance, theoretical contributions, and scalability of our work!**
**On the true label space metric vs. approximations**
Excellent question! We find that there is not a single “correct” metric space for a given problem, and that many approximations can be made without significantly impacting performance (e.g. deriving a metric from pretrained class embeddings, graph summarization, or minimum spanning tree approximations). We hypothesize that this phenomenon is due to the presence of redundancy within many of the objects commonly used in ML (representations, graph structures, etc.). This redundancy enables a variety of potentially distinct metric spaces to contain enough geometric information to be successfully used by Loki.
**On logistic regression in practice**
Note that this logistic regression result (LR) holds in practice for a variant of our SimCLR experiments, which involves applying it to self-supervised feature embeddings. While simple, this result applies to the broad range of modern self-supervised learning techniques that use logistic regression as part of the learning pipeline. We believe that the result can also be extended beyond LR to richer distributions and model classes, but the machinery involved in such an analysis becomes more complex **without shedding additional light on the fundamental tradeoffs.** For example, we could perform a similar analysis for two-layer neural networks, but existing results, which must serve as a component in our result, are often unwieldy and only obscure the roles played by key quantities (e.g., the graph diameter).
**On implications of Theorem 1**
Thank you for bringing this up – we added a detailed discussion of Theorem 1 in the updated draft. In short, we can bound individual errors in expected squared distance as a function of the squared diameter of the graph, the number of observed classes, the dimension of the features used to train the logistic classifier, the number of samples, and a problem-dependent constant. The squared errors relate to the squared diameter of the graph, as this is the maximum value that they can take on. The bound improves as we obtain more samples—as we would expect. Furthermore, since the bound has a quadratic dependence on K, we can also improve the bound if our classifier is trained on fewer classes.
**On error bars and significance of Table 1**
The results of Table 1 are deterministic, as we cannot retrain CLIP over multiple seeds, although we will provide error bars for our other experiments in the final version.
**On more experimental details and reproducibility**
We absolutely agree! We will include full experimental details and release code reproducing the results in the final version.
**On other competitive structured prediction methods using pretrained models**
We are unaware of other structured prediction approaches that operate in our setting: adapting a pretrained classifier to enable the navigation of metric spaces without any additional training. **However, we evaluated alternatives to Loki**, such as the Fréchet median (see common response).
**On applying Loki to the smallest number of observed classes.**
The reason for applying Loki to a small set of classes is twofold – practical usage (i.e., not needing to train a classifier on the entire, potentially large label space), and theoretically as a property of our bound, as you correctly pointed out. Consider the example of 2D grid graphs – in order to predict an arbitrarily large grid, we only need the four corners to form an identifying locus cover. In other words, we can predict an arbitrarily large number of classes using only a four-class classifier.
**On real-world use-case of grid graphs**
Problems in which classes represent points in space can be represented as grid graphs, e.g., predicting locations on a map, predicting the next move in a board game such as chess, and predicting atoms in a lattice where edges represent physical interactions [1].
**On Theorem 4.8 for grid graphs**
For bounded 2D rectangular grids, the result is trivial – over a constant four rounds of active selection, select the four corners. This leads to an identifying locus cover. If we relax the requirement that each node can be predicted uniquely, then this can be done in only two rounds of active selection by selecting opposite corners. This forms a minimal locus cover. We will include this example in the final version of the text.
**On the one vs. rest baseline**
We used one vs. rest logistic regression for its simplicity, although a similar result can be obtained using multiclass logistic regression. Applying logistic regression to self-supervised embeddings is typical for self-supervised learning approaches.
**On reasons to compute loci on partially observed label spaces such as PubMed and LSHTC**
We used these datasets because they (1) have a large number of classes and (2) either have a native metric space structure (LSHTC) or have class names from which a metric space can be derived easily using pretrained class embeddings (PubMed). To our knowledge, there is no prior work on computing loci of label spaces in the machine learning literature from which we could have obtained datasets.
**On the purpose of Figure 1**
Fig. 1 visualizes the locus of 100-node graphs when applying Loki to a 3-class classifier. For example, the leftmost simplex represents prediction using the argmax – in this case, there are three equally-sized regions corresponding to the Softmax outputs of the 3-class classifier. We will clarify this in the final version.
**On calibration**
This is an excellent point! We have included a new analysis of the effect of calibration, as described in the common response.
[1] https://arxiv.org/abs/2010.09990
[2] https://arxiv.org/abs/1706.04599
---
Rebuttal Comment 1.1:
Title: Thank you for the clarifications and additional supporting results
Comment: I would like to thank the authors for their detailed and useful response. The calibration results are interesting and would be a good addition to the paper, providing useful guidance on tuning the temperature hyperparameter. I do not have further questions at this time and would like to keep the score of "accept".
---
Reply to Comment 1.1.1:
Comment: Thank you for engaging with our rebuttal, and for your positive feedback! We are excited to include the temperature scaling results in the final version of the paper as we believe that your suggestion to study calibration revealed an exciting aspect of our work. Please let us know if any additional questions come up about our new results or anything else about the paper. Thanks! | Summary: This paper considers the zero-shot model adaptation to testing tasks that include classes not seen during training. The authors propose a post-hoc method called LOKI, which applies a class-graph-based transformation to make predictions on these unseen classes. The authors also provide the theoretical analysis and experimental results to support the proposal.
Strengths: The problem considered in this paper is significant in the machine learning community, and the proposed solution is technologically reasonable.
The theoretical results are interesting and provide a solid justification for the proposed solution.
The paper is well-written and easy to follow.
Weaknesses: The graph used by LOKI seems to rely heavily on prior knowledge of the classes.
In Sections 5.1 and 5.2 of this paper, experiments were conducted on CIFAR100 and ImageNet, respectively. Are there any additional results on other datasets?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How can a reliable graph be obtained in real-world applications? The experiments implement it using hierarchical trees and WordNet. Does this limit the practical application of the proposal?
---
Thanks for the detailed clarifications, which have addressed my concerns. I would like to keep my score.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors do not provide a discussion about the limitations and potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful review and for noting the significance of the problem considered in our work! We have clarified points about our experimental results and the metric spaces that we use below.
**On prior knowledge of the classes**
Metrics relating the classes are often readily available, but **even when they are not, our method still works**. Our experimental results in Section 5 include two such examples:
- In Table 1, the “Internal” metric space refers to using class embeddings from the pretrained CLIP model itself, and
- In Table 2, the metric space for our PubMed results is derived from SimCSE embeddings.
In both of these cases, Loki improves over the baseline, which confirms that prior knowledge of the classes is not required.
**On experimental results beyond CIFAR-100 and ImageNet**
Yes, our submission contains experiments on PubMed and the LSHTC datasets, please see Tables 2 and 3 in our submission. Both **datasets contain substantially more classes than CIFAR-100 and ImageNet (ILSVRC 2012)**, with PubMed containing almost 10,000 classes and LSHTC containing 325,000 classes. We chose these datasets in order to have diversity both in terms of the data domain and the size and structure of the metric space.
**On obtaining metric spaces in the real world**
Great question! Our experiments include graphs from a variety of sources that are **not limited to hierarchies.** For example, our PubMed and CIFAR-100 experiments include metrics that were derived from models where we naively applied the Euclidean distance to off-the-shelf embeddings and internal representations of the class names. The resulting metric spaces are not hierarchical. Obtaining these graphs simply required applying a standard procedure rather than needing any specialized domain knowledge. This procedure can be followed in any setting where embeddings are available.
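As an illustration of that standard procedure — a hedged sketch, with random vectors standing in for real SimCSE/CLIP class-name embeddings — deriving a finite metric space reduces to a pairwise Euclidean distance computation:

```python
import numpy as np

def metric_from_embeddings(emb):
    """Pairwise Euclidean distance matrix over class embeddings.

    emb: (num_classes, dim) array of class-name embeddings
    (in practice, e.g., SimCSE or CLIP text features).
    """
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b, clipped for numerical safety.
    sq = (emb ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * emb @ emb.T
    return np.sqrt(np.maximum(d2, 0.0))

# Toy stand-in for pretrained embeddings of 5 class names.
rng = np.random.default_rng(0)
D = metric_from_embeddings(rng.standard_normal((5, 16)))
```

Any pretrained embedding table can be dropped in; the resulting distance matrix is generally not hierarchical, matching the non-tree metric spaces described above.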
A natural question to ask is whether different approaches to obtaining graphs lead to similar results. Excitingly, in our CIFAR-100 experiments, **we show that Loki can lead to improved performance across three different metric spaces for the same problem,** suggesting that Loki is robust to changes in the metric space.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed clarifications, which have addressed my concerns. I will keep my score and recommend acceptance of this paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for engaging with our rebuttal, and for your positive feedback! Please let us know if there are any questions we can answer about our new results or anything else about the paper. Thanks! | Rebuttal 1:
Rebuttal: We thank all reviewers for their comments! We particularly appreciate reviewers **praising the significance of our work** (reviewers cqEn, 4Uud, and 2m87) and **praising the simplicity of our method** (reviewers zgY7 and dZLD).
Given the novelty of our problem setting and approach, reviewers had several questions and suggestions for experimental improvements before recommending acceptance.
These helpful comments have significantly improved the quality of our submission, and we respond to each of them individually below. **We are confident that our work offers an exciting and impactful new problem that invites future study and practical application, while introducing a strong baseline--Loki**.
Before we proceed with these individual responses, we **highlight new results** added to the paper since submission, including several responding to reviewers’ questions and suggestions. These include a variety of exciting new experimental results that have significantly improved the paper:
### New results
1. **New Calibration Analysis** (reviewers zgY7 and 4Uud). We perform this analysis on CIFAR-100 with the CLIP-ResNet-50 model. Calibration uses softmax temperature scaling [1]. Our results are shown in Figures 1 and 2 of our attached document. In Figure 1, we provide reliability diagrams and plots of the expected calibration error (ECE) as the temperature varies. From these experiments, we **obtain the following new insights**:
- the optimal Softmax temperature for Loki is close to the optimal temperature for calibration,
- tuning the temperature can lead to improvements in Loki’s performance, and
- CLIP probabilities were already well-calibrated.
In Figure 2, we scale the softmax temperature to navigate the tradeoff curve between the 0-1 loss and the expected squared distance on which Loki is evaluated. **For two of the metric spaces, we find that temperature scaling can lead to higher accuracy using Loki,** and even when this is not the case, temperature scaling can be used to trade off between the two evaluation metrics. We appreciate the suggestion from the reviewers; this analysis revealed the softmax temperature as an important hyperparameter for improving the results from Loki.
2. **Cross-Metric Space Comparison via a New Evaluation Metric** (reviewers zgY7 and 2m87). Expected squared distances cannot be directly compared across metric spaces, as they may be on different scales. Our solution is to use a form of normalization: we divide the expected squared distance by the square of the graph diameter. This brings all of the values to the 0-1 range, and since $\mathbb{E}[d^2(y, \hat{y})] / \text{diam}(G)^2$ indeed also generalizes the 0-1 error, this enables comparison between 0-1 errors and those from different metric spaces. We provide these results in Table 1, again for our CLIP experiments on CIFAR-100. This new evaluation metric enables us **to determine which metric spaces have geometry best 'fit' to our pretrained models**. For example, for CIFAR-100, we observed that the WordNet metric space resulted in the lowest error and has the best geometry.
3. **Comparison of Loki Alternatives** (reviewers zgY7, 4Uud, and dZLD). Loki is based on the Fréchet mean, which is defined as $\arg\min_{y \in \mathcal{Y}} \sum_{i=1}^K P_{\lambda_i|x} d^2(y, \lambda_i)$. However, this is not the only approach that can be considered. For example, the Fréchet *median*, often used in robust statistics, is defined as $\arg\min_{y \in \mathcal{Y}} \sum_{i=1}^K P_{\lambda_i|x} d(y, \lambda_i)$, without squaring the distances. More generally, we can define $\arg\min_{y \in \mathcal{Y}} \sum_{i=1}^K P_{\lambda_i|x} d^\beta(y, \lambda_i)$ and evaluate different choices of $\beta$. We conduct this experiment on ImageNet using SimCLR as our pretrained classifier with 250, 500, and 750 randomly selected classes. From this analysis, we conclude that using the **Fréchet mean is the optimal formulation for Loki**.
4. **Evaluation of Active Next-Class Selection Procedure on ImageNet** (reviewer zgY7). We had previously validated Theorem 4.8 in our submission using a toy example and showed that the size of the locus indeed increases optimally. In our new experiment, we validate that this optimal increase in locus size indeed results in improved performance over selection rounds. We do so on ImageNet (ILSVRC 2012) using SimCLR, beginning with 500 randomly sampled classes. Our passive baseline randomly selects a new class at each round, and our proposed approach actively selects the optimal next class at each round. Our results are shown in Figure 3. Over 50 selection rounds, we found that our active approach led to consistent and significant gains over the passive approach in expected squared distance. From this analysis, we have further evidence that our **active selection approach is effective for minimizing the expected squared distance over selection rounds**.
Please let us know if we can answer any questions about these new results! We are excited to engage with you all during the discussion phase.
[1] https://arxiv.org/abs/1706.04599
Pdf: /pdf/81650dda6b139aa2efac536b23976bdb61bd76d6.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper introduces a method capable of adapting a pre-trained model to a larger label space. To achieve this, it leverages the information of geometric distances between labels within the target larger space and replaces the common argmax operation with the Frechet mean. The paper also includes several theoretical analyses on sample complexity and optimal label subspaces.
Based on the proposed locus definition, the paper presents an active learning paradigm with the objective of expanding the maximum label coverage by introducing new labels. The empirical results demonstrate that the proposed method outperforms the vanilla baseline method in tasks of zero-shot classification and partially-observed label spaces.
Strengths: - the usage of Frechet mean is simple and intuitive for solving partial label classification tasks
- the theoretical analyses are thorough and address several key questions in terms of sample complexity and optimal label subspace
Weaknesses: - Evaluation Fairness: The empirical results from Table 1 and 2 are reported as $E[d^2(y, \hat{y})]$ in the respective metric space that Loki is optimized for. It is not too surprising that Loki outperforms the argmax/one-vs-rest baseline trained without such information. The paper lacks more fair metrics like accuracy or a held-out metric space to demonstrate the overall effectiveness of Loki.
- Stronger Baselines: Both zero-shot classification and partially-observed classification experiments only include vanilla baselines (e.g., CLIP argmax, SimCLR with one-vs-rest classifier). Considering the proposed method is pretty simple, the paper should compare against stronger baselines [1][2][3][4] in terms of performance and efficiency. Additionally, the claim that graph neural network architectures are heavyweight and challenging to scale up to extremely large graphs, where Loki performs better, needs quantitative measurements to support it (e.g., running efficiency, model size, etc.).
- Uncalibrated Uncertainty in Loki Formulation: Often, the probability of outputs from classification is uncalibrated [5], and the paper lacks an ablation study examining how the correctness of uncertainty $P_{\lambda}$ affects the final performance.
- Effectiveness of Active Learning: While the final experiment indicates that the proposed active learning paradigm successfully leads to larger loci, there is no guarantee or proof on how well the paradigm affects the final performance in classification tasks.
- Clarity: Section 4.2 is too wordy; consider emphasizing key statements and bringing several key algorithms from B.2, B.3 to the main text for better clarity.
[1] X. Wang, Y. Ye, and A. Gupta. Zero-shot recognition via semantic embeddings and knowledge graphs. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 6857–6866, Los Alamitos, CA, USA, jun 2018. IEEE Computer Society.
[2] Abhinaba Roy, Deepanway Ghosal, Erik Cambria, Navonil Majumder, Rada Mihalcea, and Soujanya Poria. Improving Zero-Shot Learning Baselines with Commonsense Knowledge. Cognitive Computation, 14(6):2212–2222, November 2022.
[3] Christoph H. Lampert, Hannes Nickisch, and Stefan Harmeling. Learning to detect unseen object classes by between-class attribute transfer. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 951–958, 2009.
[4] Junyu Gao, Tianzhu Zhang, and Changsheng Xu. I Know the Relationships: Zero-Shot Action Recognition via Two-Stream Graph Convolutional Networks and Knowledge Graphs. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):8303–8311, July 2019.
[5] Nixon, Jeremy, Michael W. Dusenberry, Linchuan Zhang, Ghassen Jerfel, and Dustin Tran. "Measuring Calibration in Deep Learning." In CVPR workshops, vol. 2, no. 7. 2019.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: As mentioned in the weakness section, here are few key questions left to answer:
- How does Loki perform in terms of accuracy?
- How does Loki perform compared to stronger baselines from recent literatures?
- How does uncalibrated probability affect the final performance?
- How useful is the proposed active learning in terms of final classification performance?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have adequately discussed several limitations and negative societal impact in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the helpful comments, and for praising the simplicity of our method and the thoroughness of our theoretical analysis!
**On evaluation fairness:**
This is an excellent point – we have added experimental results that address it in two ways.
- First, in our new calibration results, we were excited to find that if the softmax temperature is tuned, **Loki can attain a better 0-1 loss than argmax prediction!** This result is illustrated in Figures 1 and 2 of our new results.
- Second, we provide new results for our CLIP experiments that use a normalized version of our metric that allows us to **compare across metric spaces**: $\mathbb{E}[d^2(y, \hat{y})] / \text{diam}(G)^2$. This metric allows us to compare across metric spaces by bringing all of the errors onto the same scale, and it also generalizes the 0-1 error when the complete graph is used, allowing us to compare against accuracy. See Table 1 in our new results. This new evaluation metric enables us to determine which metric spaces have geometry best ‘fit’ to our pretrained models. For example, for CIFAR-100, we observed that the WordNet metric space resulted in the lowest error and has the best geometry.
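A minimal sketch of this normalized metric (toy data; `D` is assumed to be a precomputed pairwise graph-distance matrix, introduced here only for illustration):

```python
import numpy as np

def normalized_expected_sq_distance(D, y_true, y_pred):
    """E[d^2(y, yhat)] / diam(G)^2, always in [0, 1].

    D: (|V|, |V|) matrix of pairwise graph distances.
    y_true, y_pred: integer index arrays of true / predicted classes.
    """
    diam = D.max()
    return float((D[y_true, y_pred] ** 2).mean() / diam ** 2)

# On the complete graph (all distances 1), the value is exactly the 0-1 error.
K = 4
D_complete = np.ones((K, K)) - np.eye(K)
y = np.array([0, 1, 2, 3])
yhat = np.array([0, 1, 2, 0])  # one mistake out of four
err = normalized_expected_sq_distance(D_complete, y, yhat)
```

On the complete graph every wrong prediction costs the full diameter, so the value coincides with the 0-1 error, which is what makes the cross-metric comparison possible.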
**On baselines**
Our work introduces a new problem setting: adapting a pretrained classifier to enable it to navigate a metric space, without any additional training. Note that **this setting is highly challenging due to the extreme paucity of resources** available for it: we do not get to train or fine-tune models, or, in most cases, observe any new data points, and sometimes cannot even access model internals such as embeddings. Existing approaches (including the suggested ones) require access to information not available in our scenario or require training a specialized model. In fact, _our motivation for developing Loki was our initial skepticism that class geometry alone was sufficient to improve pretrained models to any extent_. Fortunately, our empirical and theoretical results demonstrate that such improvements are often possible.
The papers suggested do not fit our setting for the following reasons:
- The method proposed in [1] requires class embeddings, which are sufficient but not necessary to apply Loki, and it **requires training** whereas Loki does not.
- The zero-shot method proposed in [2] **requires training** a graph convolutional autoencoder and is limited to classes in ConceptNet.
- The method in [3] **requires attribute metadata**, which is different from our metric space assumption.
- [4] also **requires training**.
In contrast to these existing methods, **Loki is a drop-in replacement for argmax prediction used in pretrained classifiers**. As far as we are aware, [1, 2, 3, 4] cannot operate in this setting.
**However, we can evaluate alternatives to Loki** – Loki replaces the $\text{arg max}$ with the Fréchet mean. We can use a different function instead of the Fréchet mean, e.g., the Fréchet median. As described in the common response, we compared four alternatives to Loki in our new experimental results (presented in Table 2) and found that Loki was optimal for minimizing the expected squared distance.
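These alternatives can be expressed as a single decision rule parameterized by $\beta$ ($\beta = 2$ recovers the Fréchet mean used by Loki, $\beta = 1$ the Fréchet median); a minimal sketch on a toy path graph, not the authors' experimental code:

```python
import numpy as np

def frechet_predict(probs, observed, D, beta=2.0):
    """argmin_y  sum_i P(lambda_i | x) * d(y, lambda_i)^beta.

    probs:    (K,) classifier probabilities over the K observed classes.
    observed: (K,) indices of those classes in the full label space.
    D:        (|V|, |V|) pairwise graph distances.
    beta=2.0 gives the Frechet mean (Loki); beta=1.0 the Frechet median.
    """
    cost = (probs[None, :] * D[:, observed] ** beta).sum(axis=1)
    return int(cost.argmin())

# Path graph 0-1-2-3-4; a 2-class classifier over the endpoints {0, 4}
# with equal confidence should place the Frechet mean at the midpoint.
V = 5
D = np.abs(np.arange(V)[:, None] - np.arange(V)[None, :]).astype(float)
y_hat = frechet_predict(np.array([0.5, 0.5]), np.array([0, 4]), D, beta=2.0)
```

With equal confidence on the two endpoints, the Fréchet mean selects node 2 — a class the classifier itself was never trained on.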
**On graph neural network architectures being heavyweight**
By heavyweight, we mean that most other methods **require training, whereas ours does not**. Regarding the scaling to large graphs, we note that
- A forward pass using a graph convolutional network (GCN) [5] requires time complexity $O(|E|)$, where $|E|$ is the number of edges.
- A forward pass using Loki requires time complexity $O(K|V|)$, and if the number of observed classes $K$ is small ($K \ll |V|$), then the time complexity is $O(|V|)$.
For problems where the graph is dense or when we only have access to pairwise distances in a finite metric space instead of a graph, the time complexity of using a GCN becomes $O(|E|) = O(|V|^2)$. This is a common setting that occurs, for example, when the metric space is derived from pretrained class-name embeddings. However in this setting, the time complexity of applying Loki is still $O(|V|)$ when $K$ is small. We conclude that the complexity of applying Loki is favorable in many graph settings.
**On uncalibrated uncertainty**
This is also an excellent point, thank you for bringing this up! Since submission, we performed calibration via temperature scaling [6]. Excitingly, we found that calibration indeed led to improvements in our CLIP experiments, and that the optimal temperature for Loki was always close to the optimal temperature for calibration. We also found that adjusting the temperature allowed us to navigate a Pareto curve between optimizing for the 0-1 loss and for the expected squared distance, and in some cases we improved both. We are excited about these new insights, which can be found in Figures 1 and 2 of our new results.
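Temperature scaling in the sense of [6] is a one-parameter rescaling of the logits before the softmax; a minimal illustrative sketch (not the exact experimental code):

```python
import numpy as np

def temperature_softmax(logits, T):
    """Softmax over logits / T; T > 1 flattens the distribution, T < 1 sharpens it."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

logits = np.array([2.0, 1.0, 0.0])
p_sharp = temperature_softmax(logits, T=0.5)
p_flat = temperature_softmax(logits, T=5.0)
```

Because Loki consumes the full probability vector rather than only the argmax (which is invariant to $T$), the temperature becomes a genuine hyperparameter for the expected-squared-distance objective.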
**On the effectiveness of active selection**
Thank you for bringing this up! We have included additional experimental results showing how our active strategy improves over a passive baseline both in terms of performance (expected squared distance) and expressivity (the size of the locus). This new result can be found in Figure 3 of our new experiments!
**On clarity**
We agree! We have updated the text to reflect this feedback. Thank you!
[1] https://arxiv.org/abs/1803.08035
[2] https://arxiv.org/abs/2012.06236
[3] https://ieeexplore.ieee.org/document/5206594
[4] https://ojs.aaai.org/index.php/AAAI/article/view/4843
[5] https://arxiv.org/abs/1609.02907
[6] https://arxiv.org/abs/1706.04599
---
Rebuttal 2:
Comment: The authors have addressed my concerns in the rebuttal. I would like to raise my rating from 4 to 5. I strongly encourage the authors to include the additional materials including temperature tuning, additional evaluation metrics, and active selection experiments in the final version.
---
Rebuttal Comment 2.1:
Comment: We will absolutely include these results in the final version, as we agree that these results have strengthened our work. Thank you for raising these points, and we appreciate your engagement with our rebuttal! Please let us know if there are any questions we can answer about our new results or anything else about the paper. Thanks! | null | null | null | null | null | null |
VPP: Efficient Conditional 3D Generation via Voxel-Point Progressive Representation | Accept (poster) | Summary: The paper proposes an efficient conditional 3D generation via voxel-point progressive representation. More specifically, a voxel semantic generator and a point upsampler have been created to achieve efficient generation on multi-category objects. To verify the effectiveness of the method, extensive experiments with SOTA results are achieved.
Strengths: 1. The paper is well-written and well-organized.
2. Extensive experiments are conducted, and impressive results are obtained.
Weaknesses: 1. It seems that the paper addresses a point upsampling task rather than a completion task, since upsampling refers to generating dense points while completion refers to synthesizing new points given partial scans.
2. It would be better to show some ablation studies on the network architectures such as removing or replacing specific components to see how they affect the performances.
3. It would be clearer to indicate Tab. 3 in line 234 for the corresponding quantitative results.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. To reduce the degree of confidence when generating voxels because of the high masking rate, Gaussian noise is added to the prompt embedding. The question is: instead of adding Gaussian noise, how about directly sampling a smaller masking rate?
2. How is Eq. (2) obtained? It would be better to provide some hints on the calculation.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable review.
**W1: The capability of VPP for partial completion task.**
Thank you for your suggestions, and we will make the necessary revisions to the pertinent statement. Furthermore, in **Figure 5** of the global response PDF, we show the partial generation results, which provide additional evidence to substantiate the efficacy of VPP.
**W2: More ablation studies on the components of the proposed VPP.**
- In **Figure 3** of the global response PDF, we present the visualization results of our proposed 3D VQGAN and Grid Smoother. It is observable that the VQGAN with $L_{occ}$ loss exhibits **enhanced reconstruction performance and adeptness in countering adversarial noise**. The Grid Smoother effectively accomplishes its role in voxel-point smoothing.
- Furthermore, we quantitatively evaluate the impact of specific components on performance, as delineated in the following table.
| Setting | Acc | FID | IS |
| --------------------- | ------ | ----- | ----- |
| VPP w/ $L_{occ}$ | 88.04% | 29.82 | 10.64 |
| VPP w/o $L_{occ}$ | 84.19% | 35.75 | 9.98 |

| Setting | Acc | FID | IS |
| --------------------- | ------ | ----- | ----- |
| VPP w/ Grid Smoother | 88.04% | 29.82 | 10.64 |
| VPP w/o Grid Smoother | 86.32% | 32.13 | 10.25 |
**W3: We have corrected all the typos.**
**Q1: How about directly sampling a smaller masking rate instead of adding Gaussian noise?**
Due to the cosine schedule generation steps during inference, we simulate various steps that may be encountered during inference by sampling from the arccos distribution. Therefore, the mask ratio during training is **not a constant value**, and it is necessary to sample high mask ratios as well. This is because the inference process starts with a 100% [mask token].
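A sketch of this sampling scheme (the common MaskGIT-style recipe; the variable names are illustrative and may differ from the VPP implementation): drawing a uniform "inference step" and passing it through the cosine schedule is equivalent to sampling the mask ratio from the arccos distribution.

```python
import numpy as np

def sample_mask_ratio(rng):
    """Draw a training mask ratio consistent with cosine-schedule inference.

    u ~ U(0, 1) plays the role of a simulated inference step; cos(pi/2 * u)
    is the fraction of tokens still masked at that step, so the ratio is
    effectively drawn from the arccos distribution. High ratios are sampled
    regularly, since inference starts from 100% [mask] tokens.
    """
    u = rng.uniform(0.0, 1.0)
    return float(np.cos(0.5 * np.pi * u))

rng = np.random.default_rng(0)
ratios = [sample_mask_ratio(rng) for _ in range(10_000)]
```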
**Q2: Some hints on the calculation of Eq.(2).**
The approach of infusing adaptive Gaussian noise into cross-modal features originates from [1], wherein the noise parameter $\lambda$ directly influences the generated samples $X$ so as to **modulate the generative dependence** on the conditioning feature. The authors derived and demonstrated this by employing the cumulative distribution function of the inner product of random vectors on a sphere, along with the Gamma function.
[1] Towards language-free training for text-to-image generation, CVPR 2022
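A sketch of the recipe from [1] as we understand it (details such as the exact normalization may differ from the VPP implementation): the noise is scaled adaptively to the feature norm, and $\lambda$ trades off fidelity to the prompt embedding against robustness.

```python
import numpy as np

def perturb_prompt(h, lam, rng):
    """Inject adaptive Gaussian noise into a (unit-norm) prompt embedding.

    Lafite-style recipe: noise is scaled relative to the feature norm,
    and the result is re-projected onto the unit sphere. A larger lam
    weakens the dependence of generation on the exact embedding.
    """
    eps = rng.standard_normal(h.shape)
    h_noisy = h + lam * np.linalg.norm(h) * eps / np.linalg.norm(eps)
    return h_noisy / np.linalg.norm(h_noisy)

rng = np.random.default_rng(0)
h = rng.standard_normal(512)
h = h / np.linalg.norm(h)           # unit-normalized prompt feature
h_tilde = perturb_prompt(h, lam=0.1, rng=rng)
```

With a small $\lambda$ the perturbed feature stays close to the original on the sphere, so generation remains faithful to the prompt while no longer depending on its exact coordinates.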
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the clarification in the rebuttal, and my concerns are addressed.
---
Reply to Comment 1.1.1:
Title: Thanks for your recognition of our work
Comment: Dear Reviewer U3Fo,
We sincerely appreciate your recognition of our work, and we're pleased that your concerns have been resolved! Your valuable suggestions significantly contributed to our work. | Summary: This work proposes to use a voxel-point progressive representation for efficient 3D generation, and it proposes a few architectures for different applications, including generation, editing, upsampling, and pre-training. Based on the reported results, the proposed method could generate various 3D shapes and could achieve competitive classification results.
Strengths: 1. The work proposes a voxel-point progressive representation, which could provide good 3D generation results on the ShapeNet dataset as shown in the reported experiments.
2. Many modules, such as 3D VQGAN, Voxel Semantic Generator, Grid Smoother, and Point Upsampler, have been proposed.
Weaknesses: 1. The proposed method could only generate 3D shapes belonging to the categories of the ShapeNet dataset. It does not show any 3D shape results from unseen categories even with the help of CLIP.
2. The point-voxel representation has been broadly studied in previous methods, such as [A-C], and has already been proven to be an efficient representation for 3D point cloud analysis.
3. The classification results for both the ScanObjectNN and ModelNet40 datasets are very saturated. Also, the improvements seem to be very incremental.
[A] Liu, Z., Tang, H., Lin, Y., & Han, S. (2019). Point-voxel cnn for efficient 3d deep learning. Advances in Neural Information Processing Systems, 32.
[B] Zhang, C., Wan, H., Shen, X., & Wu, Z. (2022). PVT: Point‐voxel transformer for point cloud learning. International Journal of Intelligent Systems, 37(12), 11985-12008.
[C] Liu, Z., Tang, H., Zhao, S., Shao, K., & Han, S. (2021). Pvnas: 3d neural architecture search with point-voxel convolution. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(11), 8552-8568.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Is the proposed method capable of generating 3D shapes of novel categories?
2. Is the proposed method capable of generating complete point clouds given a partial point cloud?
3. Does the proposed method demonstrate robustness when presented with noisy point clouds as input?
4. Is the generated 3D shapes caused by overfitting the ShapeNet dataset? Maybe the authors could try to retrieve the most similar shape in the ShapeNet dataset to prove its generation ability.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable review.
**W1: Unseen categories generation.**
- NeRF-based approaches like DreamFusion can achieve open-vocabulary zero-shot generation, but their high computation time and training costs make practical utilization challenging. In contrast, our method strikes a balance among **multi-category generation, generation efficiency, and generation quality**, and is able to perform **multiple 3D downstream tasks** including conditional generation, editing, completion, and pre-training.
- VPP trained on ShapeNet is incapable of achieving open-vocabulary zero-shot generation, but it can produce novel categories to some extent. Furthermore, by employing a larger dataset, Objaverse [1], VPP demonstrates the capability to generate more common objects. These results are illustrated in **Figure 2&6** of our global response PDF. Due to computational resource constraints, our model was trained for only 50 epochs on Objaverse. The current results represent preliminary findings.
**W2: The correlation between the previous point-voxel representation and VPP.**
- The previous point-voxel methods were predominantly architectural endeavors, aimed at achieving improved classification, detection, or segmentation performance.
- VPP is the first attempt to achieve efficient conditional 3D generation by sharing the distinct advantages of each representation. We are grateful to the preceding point-voxel representation methods, and in the appendix we will add an Additional Related Work section to acknowledge and cite the pertinent contributions.
**W3: Improvement of classification results.**
- Our Point Upsampler is implemented based on Point-MAE and achieved a 4.1% performance improvement on ScanObjectNN. Our model achieves SOTA performance on the benchmark that only uses ShapeNet point clouds as pre-trained data.
- Some recent work, such as I2P-MAE[2] and ReCon[3], achieved better performance through cross-modal tutors and cross-modal data.
**Q1: Capability of generating 3D shapes of novel categories.**
We conduct novel-category generation experiments, which demonstrate the capability to generate novel-category shapes, such as "a car boat". The results are presented in **Figure 6** of the global response PDF.
**Q2: Capability of partial point cloud generation.**
Thank you for your suggestion. We generate partial data by employing a block mask on the original point clouds. The results are depicted in **Figure 5** of the global response PDF, illustrating the partial generation capability of VPP. Furthermore, the generated samples exhibit diversity, providing further evidence of the performance of VPP.
**Q3: Robustness of VPP when presented with noisy point clouds as input.**
- As depicted in **Figure 3** of the global response PDF, our model demonstrates the capacity to not only faithfully reconstruct the provided input but also proficiently rectify distinct imperfections and noise inherent within the input data.
- In addition, we quantitatively compare the robustness of the model to the training data. The table below shows the changes in text-conditional generation metrics after we added random scale, translate, and jitter disturbances to the input point cloud.
| | disturbance | Acc | FID | IS |
| ---- | ----------- | ------ | ----- | ----- |
| VPP | - | 88.04% | 29.82 | 10.64 |
| VPP | random scale | 87.63% | 31.20 | 10.15 |
| VPP | random translate | 83.76% | 36.92 | 8.71 |
| VPP | random jitter | 87.29% | 30.68 | 10.44 |
**Q4: Are the generated 3D shapes caused by overfitting the ShapeNet dataset?**
Thank you for your suggestion. We conduct a retrieval evaluation on samples generated by VPP and the ShapeNet dataset. The results are shown in **Figure 8** of the global response PDF. There are no completely identical samples, therefore VPP is not overfitting the ShapeNet dataset. The results generated by VPP are more like an understanding and integration of shape knowledge.
[1] Deitke M, Schwenk D, Salvador J, et al. Objaverse: A universe of annotated 3d objects. CVPR 2023
[2] Zhang R, Wang L, Qiao Y, et al. Learning 3d representations from 2d pre-trained models via image-to-point masked autoencoders. CVPR 2023
[3] Qi Z, Dong R, Fan G, et al. Contrast with Reconstruct: Contrastive 3D Representation Learning Guided by Generative Pretraining. ICML 2023
---
Rebuttal Comment 1.1:
Title: Further Discussion
Comment: Dear Reviewer THuC,
Thanks again for your valuable comments and suggestions! The Author-Reviewer discussion is coming to an end, and we hope that we have addressed all of your concerns. Please, let us know if you have any follow-up questions or concerns. We will be happy to answer them.
Best Regards,
Authors | Summary: Authors propose an approach to generate 3D point clouds of objects with an image or text description as input. Authors use a pre-trained CLIP model to generate text/image embeddings and use this to first generate features in voxel space (Voxel Semantic Generator). These voxel features are then decoded into a coarse voxel grid. Authors then convert the voxels to point clouds and use a smoothing network to obtain a uniform point cloud. Authors use a transformer to then convert this point cloud into a detailed 3D shape (also point cloud).
Authors show qualitative and quantitative comparison with relevant baselines (CLIP-Sculptor, CLIP-Forge and Point-E).
Authors also show several applications like text and image conditioned 3D generation, shape editing and completion.
Strengths: + Ideas presented in the work are technically sound.
+ Presented results outperform competing baselines.
+ Authors submitted code. Although I did not run it, this is still appreciated. Code will also be released upon acceptance.
+ Paper is mostly well written. See comments below for minor improvements.
Weaknesses: [Technical]
1. How is the proposed Voxel Semantic Generator different than “CLIP-Conditioned Coarse Transformer” from CLIP-Sculptor? They are essentially doing the same thing, 1. Learn a voxel based encoding of 3D shapes and 2. Use a transformer to learn how to unmask the 3D features conditioned on a prompt.
Can authors clarify this better?
2. L175-178: Since the GT for Grid Smoother is generated using furthest point sampling, why can’t we use FPS at inference time to smoothen the coarse voxel grid from the 3D VQGAN decoder?
Why do we need to learn a neural network?
How useful is adding KL loss over Chamfer loss?
3. L180: The output of the grid smoother is a point cloud (I assume this because it is trained with a Chamfer loss). L169 mentions that “point tokens” are masked which are then unmasked by the transformer. What exactly are the “point tokens”? Is it just the positional encoding of points?
Fig.2 (b) mentions “semantic tokens” what does this mean? It is not explained in Sec 3.3 (Point Upsampler).
4. Eq. 1: How useful is the occupancy loss over MSE? Is this critical? Please add an ablation study in supp. mat. to support the proposal.
[Minor]
- Fig. 2: Please add symbols used in the text to Fig.2. This makes it easier to follow the text and map various components of the pipeline.
- Add dimensionality of each embedding/latent code along with the symbols, eg: C_{pr} \in \mathbb{R}^{<whatever>\times<whatever>}. This significantly improves clarity and readability.
- L120: How is the mapping performed? How is the codebook generated? If an existing work is used, please add a citation here. This is present in the supp. mat. to an extent. Add a pointer so that the reader knows where to look.
- L140: Just curious, how important is the scheduling of the mask fraction?
L147: Is there significant performance difference at inference time between direct and multi-step prediction?
- L234: Table number is missing.
- Please avoid violent objects like guns for showing qualitative results. It is understandable that it is sometimes necessary to demonstrate performance but whenever possible, let us try to avoid this.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Overall I'm positive about the work. There are some concerns about the novelty/necessity of some components (see above) and it would be great if authors can clarify these.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Discussion on Limitations and Broader Impact is not included in the main paper but it is present in the supp. mat. Authors are encouraged to add some discussion on limitations and future work in the main paper as it allows the community to better use and build upon your research.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable review.
**W1: Difference between Voxel Semantic Generator and CLIP-Sculptor?**
- Our goal is to share the representation advantages of both voxels and points. The objective of the Voxel Semantic Generator is to provide positional encoding for the Point Upsampler. In contrast, CLIP-Sculptor entirely uses voxels as representation. The transformer sequence length is cubically proportional to the voxel resolution, resulting in slower inference speeds at high resolutions.
- Additionally, VPP employs a **meticulously designed 3D VQGAN** to generate codebooks rather than VQVAE. VPP utilizes a mask ratio based on the **arccos distribution sampling** method, rather than a two-step unrolled training loss.
- During inference, we use the **Central Radiative Temperature Schedule strategy** to get better performance.
**W2: Why do we need the proposed Grid Smoother instead of FPS?**
FPS cannot serve as a substitute for the proposed Grid Smoother module. Its objective is to **mitigate the representation gap** as much as possible, because the point cloud generated by the Voxel Semantic Generator is **discrete and grid-aligned**. Through a neural network, we obtain **continuous and uniformly distributed point clouds**. The effect of grid smoothing is demonstrated in **Figure 3** of the global response PDF. In contrast, the nature of FPS is downsampling, rendering it incapable of producing continuously valued points.
Furthermore, FPS leads to a reduction in point quantity. Typically, our Voxel Generator can only produce between 200 to 500 points. Employing FPS downsampling would result in the loss of certain geometric information.
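To make this distinction concrete, farthest point sampling only ever *selects* a subset of its input coordinates — it never produces new, continuously valued points. A minimal, self-contained sketch (the function and variable names are illustrative, not from the authors' code):

```python
def farthest_point_sampling(points, k):
    """Select k points by iteratively picking the point farthest from the
    already-chosen set. Note that every output point is copied from the input:
    FPS downsamples, it cannot smooth or densify a grid-aligned point cloud."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    chosen = [points[0]]
    # Minimum squared distance from each point to the chosen set so far.
    dist = [d2(p, chosen[0]) for p in points]
    while len(chosen) < k:
        idx = max(range(len(points)), key=lambda i: dist[i])
        chosen.append(points[idx])
        dist = [min(dist[i], d2(points[i], points[idx])) for i in range(len(points))]
    return chosen
```

Running this on a regular grid shows the output is always a subset of the grid vertices, which illustrates the rebuttal's point that FPS cannot replace a learned smoother.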
**W3: The explanation of "point tokens" and "semantic tokens".**
We employ a pre-trained Point-MAE encoder as the tokenizer of PointUpsampler to generate semantic point tokens. The point cloud generated by the grid smoother is utilized to provide positional encoding for these semantic tokens. We provide a more comprehensive explanation of this aspect in the revised version.
**W4: How useful is the occupancy loss over MSE?**
Thanks for your suggestion! We have incorporated this result into **Figure 3** of the global response PDF. As depicted, the addition of the $L_{occ}$ loss significantly improves the reconstruction performance compared to the vanilla VQGAN. This improvement not only faithfully restores the voxels but also effectively **corrects specific imperfections and noise** in the input data.
Furthermore, the following table presents a quantitative performance comparison regarding the $L_{occ}$ loss.
| | Acc | FID | IS |
| ------------ | ------ | ----- | ----- |
| VPP w Locc | 88.04% | 29.82 | 10.64 |
| VPP w/o Locc | 84.19% | 35.75 | 9.98 |
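The exact form of $L_{occ}$ is defined in the paper and not reproduced in this thread. Purely to illustrate the kind of term being ablated — supervising binary voxel occupancy directly rather than regressing values with MSE — a hypothetical binary-cross-entropy occupancy term might look like:

```python
import math

def occupancy_bce(pred_probs, target_occ, eps=1e-7):
    """Hypothetical occupancy loss: mean binary cross-entropy between predicted
    occupancy probabilities and binary ground-truth voxel occupancy.
    (Illustrative only -- the paper's actual L_occ may differ in form.)"""
    total = 0.0
    for p, t in zip(pred_probs, target_occ):
        p = min(max(p, eps), 1.0 - eps)  # clamp for numerical stability
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(pred_probs)

def mse(pred, target):
    """Plain mean-squared-error baseline for comparison."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
```

The intuition behind such a term is that a classification-style loss on occupancy penalizes a voxel being wrongly empty/filled much more sharply near the decision boundary than MSE does.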
**Minor**
- We greatly appreciate all the constructive suggestions you have provided. We will incorporate symbols in the figures, add pointers, and correct typos, which will significantly enhance the readability.
- Concerning the scheduling of the mask ratio, MaskGit[1] has conducted experiments on the scheduling strategy of mask fraction for parallel decoding. In this study, we adopt the optimal strategy.
- In the ablation study Figure 8 of the main paper, we observed that a moderate number of inference steps (4/8) yields the best generative performance, while single-step generation leads to a decrease in accuracy by up to 6%.
[1] Chang H, Zhang H, Jiang L, et al. Maskgit: Masked generative image transformer. CVPR 2022
---
Rebuttal Comment 1.1:
Title: Post rebuttal update
Comment: Thanks authors for the rebuttal. It addresses my concerns and I keep my positive rating of the work.
---
Reply to Comment 1.1.1:
Title: Thanks for your recognition of our work
Comment: Dear Reviewer YMLb,
Thanks for keeping your positive rating of the work. Your constructive advice helps a lot to improve our work! | Summary: The VPP proposes a model for 3D generation. It utilizes both point-based and voxel-based representations. Voxel-based representations are used to generate the coarse tokens, and the point-based one further improves the result. Both of which are pretrained with MAE-like self-supervised method.
The proposed method applies to image-to-point, text-to-point, and point completion. The proposed method is novel and effective. However, some parts are not presented clearly.
Strengths: 1. The limitations of existing methods are well analyzed.
2. The proposed method is carefully designed and performs well.
3. The proposed method applies to more downstream tasks than existing methods.
4. The experiments show the effectiveness of the proposed method.
Weaknesses: 1. The inference efficiency in Table 1 only presents hours or seconds; it is recommended to provide the exact time.
Overall, the paper is not well presented; some details are not shown clearly. To be specific:
2. The tokenizer for the point upsampler is not described. What is its structure, and how is it trained?
3. It would be better to refer to some important symbols in Figure 2. For example, the prompt embedding in Figure 2 should also be marked as C_pr.
4. The GAN's structure and loss in 3D VQGAN are not introduced.
5. As described in Section 3.2, the mask voxel transformer will be forwarded multiple times for better quality. It should also be noted in Figure 2 or 3 for better presentation.
Some typos:
6. Around line 151: "when the inputs of the mask voxel transformer are grid tokens with high masking fraction and prompt embedding we find that this will lead to the semantics of the predicted grid tokens being much more biased toward the prompt embedding". I think the authors may be missing punctuation here.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. I am not sure what parallel decoding in Figure 3 means.
2. When training the voxel semantic transformer, part of voxel tokens are masked, as shown in Figure 2a, and the masking ratio is about 64%. While all tokens are masked voxel tokens during inference. Is there a gap between training and inference?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 4 excellent
Limitations: Not applicable
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable review.
**W: Some presentation suggestions. The unclear explanation of tokenizer and 3D VQGAN.**
- We appreciate your suggestions regarding the presentation. In the revised version, we include **the exact time in Table 1**, incorporate **symbols** into the process diagram, and emphasize multi-step decoding in Figure 2&3 to enhance readability. Additionally, we correct all the typos.
- About the tokenizer of the Point Upsampler, we employ a **pre-trained Point-MAE** encoder as the tokenizer to generate semantic point tokens. We will elaborate on it in the subsequent revised version.
- The structural diagram of 3D VQGAN is shown in **Figure 4** of the global response PDF. We have described the proposed $L_{occ}$ loss in the main paper and demonstrated the training pipeline of 3D VQGAN in the supplementary materials. We will further enrich the explanation of the VQGAN loss in the revised version.
**Q1: The meaning of parallel decoding.**
Parallel decoding is introduced to distinguish it from sequential autoregressive decoding. In each step of parallel decoding, the model concurrently predicts all tokens in parallel, conditioned on the probability distribution from the previous step. Parallel decoding constitutes one of the crucial factors enabling VPP to achieve efficient generation.
**Q2: Is there a gap between training and inference on the mask ratio?**
VPP employs a cosine schedule during parallel decoding. Specifically, at each inference step:
- The mask transformer predicts the logits of the codebook for all masked grid tokens.
- Subsequently, we employ the cosine schedule to calculate the mask ratio at the current step and determine which tokens are to be fixed based on the logits scores.
- The non-fixed tokens are replaced with mask tokens and proceed to the next step.
Due to the cosine schedule of mask ratio during inference, we sample mask ratios from the arccos distribution during training to simulate various inference steps. The mean mask ratio is 0.64, yet it **doesn't mean a fixed mask ratio**.
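As a hedged illustration of these steps (not the authors' code — the function names and the toy `predict` interface are assumptions), a MaskGit-style parallel decoding loop with a cosine mask-ratio schedule, plus the arccos training-time mask-ratio sampling, can be sketched as:

```python
import math
import random

def cosine_mask_ratio(step, total_steps):
    """Fraction of tokens still masked after `step` of `total_steps` (cosine schedule)."""
    return math.cos(math.pi / 2 * step / total_steps)

def sample_training_mask_ratio():
    """Sample a training-time mask ratio r = cos(pi/2 * u), u ~ U(0, 1).
    This is the arccos distribution; its mean is 2/pi (~0.64), matching the
    ~64% mean mask ratio mentioned in the discussion above."""
    return math.cos(math.pi / 2 * random.random())

def parallel_decode(predict, num_tokens, steps):
    """MaskGit-style parallel decoding sketch. `predict` maps the current token
    list (None = masked) to a (token, confidence) pair for every position."""
    tokens = [None] * num_tokens  # start fully masked
    for step in range(1, steps + 1):
        preds = predict(tokens)  # all positions predicted in parallel
        ratio = cosine_mask_ratio(step, steps)
        target_fixed = num_tokens - int(round(ratio * num_tokens))
        masked = [i for i in range(num_tokens) if tokens[i] is None]
        masked.sort(key=lambda i: -preds[i][1])  # most confident first
        n_new = max(0, target_fixed - (num_tokens - len(masked)))
        for i in masked[:n_new]:  # fix the most confident; the rest stay masked
            tokens[i] = preds[i][0]
    return tokens
```

At the final step the cosine schedule reaches zero, so all positions end up fixed, which is how multi-step decoding terminates with a complete token grid.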
---
Rebuttal Comment 1.1:
Title: Further Discussion
Comment: Dear Reviewer A5WF,
Thanks again for your valuable comments and suggestions! The Author-Reviewer discussion is coming to an end, and we hope that we have addressed all of your concerns. Please, let us know if you have any follow-up questions or concerns. We will be happy to answer them.
Best Regards,
Authors | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for your valuable feedback that significantly contributed to our work. VPP achieves efficient, multi-category, high-quality conditional generation through voxel-point progressive representation, and is capable of performing various tasks such as editing, completion, and pre-training.
We present additional experimental results in the following PDF to offer a more intuitive and clear response to the reviewer's relevant comments and questions. Moreover, we further demonstrate the optimized mesh generation performance and the generation results using a larger dataset. Specifically, this PDF includes the following:
Figure 1: Improved surface reconstruction results through SAP.
Figure 2: Visualization of text-conditioned generation results using Objaverse dataset.
Figure 3: Visualization results of 3D VQGAN and Grid Smoother.
Figure 4: Structure diagram of the 3D VQGAN.
Figure 5: Generation results of partial inputs.
Figure 6: Novel categories generation results.
Figure 7: Ablation experiment that only uses point clouds as representation.
Figure 8: Ablation experiment that retrieval ShapeNet samples.
Due to space limitations, we employ a two-column layout.
Pdf: /pdf/7deca931f62331aa66b1ab9cb6ecf381fda42291.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper proposed a text-driven point cloud generation model that can be used for various downstream tasks such as generation, editing,
completion and pretraining, while being very efficient. The method largely follows Muse [2], but adapts it to 3D point clouds. The model consists of multiple components, which are trained individually. First, a VQGAN is trained on the voxelized shapes, which embeds a shape into a latent grid. Next, a masked transformer is trained on the latent grid to model its distribution, using CLIP features as semantic conditioning. For the decoding stage, a grid smoother network moves the voxel centers to more homogeneous locations, while finally another set of point cloud VQVAE and masked transformer is trained to upsample the coarse point cloud to a high-resolution point cloud.
Strengths: * The proposed method is able to handle a wide range of tasks such as generation, editing, completion and pretraining.
* VPP significantly outperforms baseline methods on the main task of shape generation.
* The proposed method is computationally efficient compared to optimization-based methods such as DreamFusion.
Weaknesses: * The components of the proposed method are not new -- they are mostly borrowed from previous works such as CLIP, MUSE and Point-MAE.
* The method contained a large number of stages, making it potentially difficult to implement or improve upon.
* Unlike diffusion-based methods such as DreamFusion, Dream3D, Magic3D, the proposed method does not generate surface or texture. It is also not clear if the proposed method is able to do zero-shot generation like these methods.
* The proposed method is only trained on ShapeNet, which is relatively small and less diverse compared to larger datasets such as Objaverse. It is unclear whether the method is able to scale to larger datasets.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * What is the benefit of using voxel as intermediate representation compared to directly encoding the point cloud?
* Missing table reference on L234.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations and societal impacts are adequately addressed in the supplemental material.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable review.
**W1: Are the components of the proposed method new?**
We have introduced several novel structures and strategies to adapt to 3D generation, such as the **3D VQGAN with occupancy loss**, the **Grid Smoother** that bridges the gap between the two representations, and the **Central Radiative Temperature Schedule** strategy during inference.
Our goal is to share the merits of different representations for efficient multi-category generation. It is natural to draw inspiration from Muse for voxels and Point-MAE for points.
**W2: Is VPP difficult to implement or improve upon?**
- It is a common phenomenon to use three stages in the generative model. For example, Muse needs to train VQGAN, Base Transformer, and SuperRes Transformer before inference.
- Furthermore, each module of VPP is based on CNN or Transformer, which is easy to implement. Modifying any stage could potentially contribute to improvements in the overall performance.
**W3&4: Can VPP generate surfaces and do zero-shot generation? How about VPP trained on a larger dataset like Objaverse?**
- First, we present the **surface reconstruction** results based on SAP in **Figure 1** of the PDF we provide in the Global Response. It demonstrates that VPP is capable of generating smooth surfaces.
- Second, VPP trained on ShapeNet can not achieve open vocabulary zero-shot generation. But it can produce some **novel categories** to some extent. The results are shown in **Figure 6** of the provided PDF.
- Third, by using a larger dataset **Objaverse**, VPP is capable of generating more common objects. We present the results in **Figure 2** of the provided PDF. Due to computational resource constraints, our model was trained for only 50 epochs on Objaverse. The current results represent preliminary findings.
**Q1: What is the benefit of using voxel as an intermediate representation compared to directly encoding the point cloud?**
- We analyze the characteristics of different representations in the introduction. Due to significant shape differences across various categories, the structured and explicit positional information provides direct **spatial cues**, thereby aiding the generalization of multiple categories. This observation aligns with previous multi-category generation methods [1-2].
- Additionally, point clouds possess continuous and sparse semantic information and heavily rely on positional encoding [3]. The point token represents **both local geometric semantics and positional information**, which significantly reduces the generation performance in complex multi-category scenarios. We show the results for directly encoding the point cloud in Figure 7 of the global response PDF. As observed, models trained without voxel representation exhibit inferior performance in generating object details compared to our current approach, particularly for generating objects with intricate structures, such as "an airplane".
**Q2: We have corrected all the typos.**
[1] Sanghi A, Chu H, Lambourne J G, et al. Clip-forge: Towards zero-shot text-to-shape generation. CVPR 2022
[2] Sanghi A, Fu R, Liu V, et al. CLIP-Sculptor: Zero-Shot Generation of High-Fidelity and Diverse Shapes From Natural Language. CVPR 2023
[3] Pang Y, Wang W, Tay F E H, et al. Masked autoencoders for point cloud self-supervised learning. ECCV 2022
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: I would like to thank the authors for the clarification and the extra experiment on Objaverse. I have raised my rating as all of my concerns are resolved. The reason that prevents me from giving any higher rating is that, although the model seems to be very powerful, its zero-shot generation capability appears to be very limited, which will limit its usefulness.
---
Reply to Comment 1.1.1:
Title: Thanks for your recognition of our work
Comment: Dear Reviewer AZ71,
We sincerely appreciate your recognition of our work and insightful comments. We will continue to explore training on large datasets to improve the zero-shot capability of VPP and include all new discussions and results in the revised version. | null | null | null | null | null | null |
Causal Discovery from Subsampled Time Series with Proxy Variables | Accept (poster) | Summary: The authors suggest a method of causal discovery for multivariate time series under the regime when the data is being sampled at constant skips in the time dimension. Under mild assumptions, they prove that their method works asymptotically.
Strengths: The paper's contribution is clear, the methods are interesting and novel, and the subject (causal inference in multivariate time series) is of interest to many. Other than some places I will mention, the explanations are clear.
Weaknesses: I would like to see a little more discussion on previous work. I see citations about work that inspired the current one, for example [17, 19, 25] on using descendants of an unobserved variable to differentiate direct causation from hidden mediation, but I do not see citations for competing methods for causality under the regime of sampling on the time dimension. It would be good to be explicit about what the current state of the art is in that direction, and how the current work compares. You mention them in Section 5 (SVAR-FCI, NG-EM, Dynotears, PC-GCE) but I'd like to see a bigger discussion on the differences. Specifically, it would be great if we had a basic example in mind where "naive" approaches clearly fail, and make us understand how the proposed method avoids the issue intuitively.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. I feel like Definition 2.3 can be phrased more simply. The way you wrote it makes it look circular.
2. I am a little confused by the figures. In Figure 2.c., I don't understand why there wouldn't be an arrow from A(t_3) to M(t_5). Or perhaps the faded arrows count? Why is there a difference between faded and non-faded arrows?
3. In Figure 3, why are some vertices faded and some not? Is there a difference? It looks like it is meant that faded means unobserved, but I don't think that's what you meant.
4. Put Remark 4.2 outside of the algorithm.
5. Line 224: random is misspelled.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive assessment and valuable suggestions on our paper. We will modify the manuscript accordingly.
**About related works.** The existing methods have been discussed in lines 30-36. To summarize, identifiability is only achieved in linear data. As for nonlinear data, only a small part of the causal information (i.e., an equivalence class) can be identified.
**About the example.** Please refer to Fig. 4 for the illustration example. In particular, naive approaches can not distinguish between the direct effect $A\to B$ and the indirect one $A\to M\to B$, due to the latent mediator $M(t_2)$. Our method solves this problem by using $M(t_3)$ as the proxy variable of $M(t_2)$.
**About figures.** For both Fig. 2c and Fig. 3, there is no difference between faded and non-faded arrows/vertices. We will modify these figures as suggested.
---
Rebuttal Comment 1.1:
Comment: Thanks, I will keep the rating. | Summary: # Summary
In this paper, the authors address the problem of inferring causal structures from subsampled time series data, where the frequency of measurement is much lower than that of causal influence. This presents challenges in identifying the causal structure, as hidden variables at unobserved time steps can induce bias. Existing methods that tackle this problem are limited to linear cases or fail to achieve identifiability.
The main contribution of this paper is a constraint-based algorithm that can identify the entire causal structure from subsampled time series without any parametric constraints. The authors propose a proxy-based causal discovery algorithm that leverages the temporal structure of time series data to remove the bias induced by hidden variables. The algorithm is nonparametric and can achieve full causal identification. Specifically, the authors leverage proxy variables to test the edge directions of the summary DAG obtained from the uniquely identified MAG.
The authors demonstrate the theoretical advantages of their method and provide experiments on both synthetic and real-world data, showcasing improved performance over existing methods.
Strengths: # Originality
The paper presents a novel approach to causal discovery in subsampled time series data by proposing a constraint-based algorithm that leverages proxy variables to handle the challenges posed by hidden variables at unobserved time steps. The originality of the method lies in its ability to identify the entire causal structure without any parametric constraints, setting it apart from existing methods that are limited to linear cases or fail to achieve full identifiability. The authors draw inspiration from the recent progress in proximal causal discovery and adapt it to the subsampled time series setting.
# Quality
The quality of the paper is high, as it presents a well-formulated methodology with solid theoretical foundations. The authors provide rigorous proofs of their proposed algorithm's identifiability properties, ensuring that the algorithm is grounded in strong theoretical underpinnings. Although I have some questions regarding its generality and required assumptions, I will elaborate on them later.
As for the experiments, the authors conduct one synthetic and one real-world experiment, which demonstrate the method's effectiveness against the baselines.
# Clarity
The paper is well-written and clear in its presentation. The methodology is clear, thanks to the intuitive explanation provided by the authors. They also provide the necessary background information on causal discovery and related literature.
# Significance
The problem that this paper aims to tackle is significant in the field of temporal causal discovery. Although I don't think this paper fully addresses it, it makes reasonable progress towards the final destination.
Weaknesses: # Weakness
Although the paper makes solid progress towards the final destination, here are some limitations I am a bit worried about:
## Assumptions are too strong:
One assumption is that only a 1-step window is considered in this paper. However, is this true in practice? Is it possible that some factors have a long-standing (higher-order Markovian) effect on the target variable? Can your method handle such a scenario?
For the summary DAG, does it have to be a DAG? Starting from a general full-time DAG, the summary graph may not be a DAG and can contain cycles. Does your method handle such cases? For example, A(1) -> B(2) and B(1) -> A(2), so the summary graph is A<->B.
What are the consequences if some variables are not self-caused? In that case the proxy variables may not exist, so full identifiability cannot be established, right? You may want to state to what extent your method can establish identifiability without such an assumption.
## Comparison with baselines
The authors include some of the well-known baselines. Surprisingly, Dynotears performs reasonably well considering that it is a linear model (in its original form). However, since Dynotears, several state-of-the-art baselines have been proposed, such as Rhino [1]. Can you compare your method with this nonlinear model?
[1]Gong, Wenbo, et al. "Rhino: Deep Causal Temporal Relationship Learning With History-dependent Noise." arXiv preprint arXiv:2210.14706 (2022). Code at https://github.com/microsoft/causica/tree/v0.0.0
## Scalability
How expensive is it to run such an algorithm, given that you need to test every pair? For the real-world experiment, you scale it to 90 variables; how long does it take to run? If your method can be scaled to higher-order SVAR, how does it scale with the window size?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In the Figure 4 proxy model, do you mean $M(t_2)$?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors did discuss limitations, but they only discuss the limitations of the conditional independence test. However, in the weakness section I have raised several improvements the authors can consider. I suggest including these discussions in the limitations as well.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the highly constructive feedback and thoughtful suggestions on our paper. We address your concerns below.
**About assumptions**:
We would first like to point out that our assumptions, such as the first-order SVAR assumption and the self-causation assumption, are commonly adopted in the literature [10-15,26-27].
**Q1.** One assumption is that only a 1-step window is considered in this paper. However, is this true in practice? Is it possible that some factors have a long-standing (higher-order Markovian) effect on the target variable? Can your method handle such a scenario?
**A**: The first-order Markov model offers a practical approximation for numerous real-world situations. This includes scenarios like Reinforcement Learning tasks based on the Markovian Decision Process, neural activities in the brain [Valdes-Sosa et al. 2004], epidemic spreading [Anderson et al. 1991], and economic growth [Orcutt et al. 1969].
The extension to higher-order settings is not a trivial task, as longer causal chains may induce bias in much more complex ways than shorter ones. We plan to address these cases in future work.
[1] Valdes-Sosa PA. Spatio-temporal autoregressive models defined over brain manifolds. Neuroinformatics. 2004.
[2] Cohen JE. Infectious diseases of humans: dynamics and control. JAMA. 1992.
[3] Orcutt GH, Winokur Jr HS. First-order autoregression: inference, estimation, and prediction. Econometrica. 1969.
**Q2.** For the summary DAG, does it have to be a DAG? Starting from a general full-time DAG, the summary graph may not be a DAG and can contain cycles. Does your method handle such cases?
**A**: Indeed, the summary graph can contain cycles. Our method can handle this case since it does not influence the identifiability results. We will modify the manuscript accordingly.
**Q3.** What are the consequences if some variables are not self-caused? In that case the proxy variables may not exist, so full identifiability cannot be established, right? You may want to state to what extent your method can establish identifiability without such an assumption.
**A**: Indeed, for this case, full identifiability cannot be established. What we can identify are ancestral equivalence classes [Plis et al. 2015]. We will add this discussion to the manuscript as suggested.
[1] Plis S, Danks D, Freeman C, Calhoun V. Rate-agnostic (causal) structure learning. NeurIPS. 2015.
**Comparison with Rhino**:
Our method outperforms Rhino by about 30% in F1-score on the synthetic dataset. Please refer to the supplementary PDF for details.
**About scalability**:
First note that we do NOT need to test every pair because most of the pairs can be screened out in step-3(a) with the necessary condition.
It takes about 20 hours to run the real-world experiment. This time cost is acceptable considering that many of the baselines, e.g., [10,11,17], cannot produce results in a feasible amount of time.
To further validate the scalability of our method, we provide more experimental results in the supplementary PDF.
**About Fig. 4**. Indeed, we mean $M(t_2)$. This is a typo.
---
Rebuttal 2:
Comment: Thanks for the authors' effort in addressing my concerns. They managed to address most of them. Still, longer dependencies (beyond a 1-step window) can be present in practice, and this remains a strong assumption in my personal opinion. I will keep my current score. | Summary: This paper proposes a non-parametric constraint-based algorithm that can identify the entire causal structure from subsampled data, leveraging proxy variables to adjust for the bias induced by the hidden variables.
Strengths: - Concise and clear theoretical derivation. Introduce the proposed method by discussing the connections between different graphs step by step.
Weaknesses: - The paper is rather incremental as the core of the method is based on [19] to use proxy variables to detect and eliminate the confounding effect brought by the subsampling.
- Moreover, the method proposed by [19] seems to require more assumptions that are not disclosed in this paper, and it is unclear whether those specific assumptions can be satisfied in this work.
- In addition, this work assumes the self-exciting property of the time series, which does not necessarily hold; it would be interesting to see how to find the proxy variables and what the result would be in this case.
- It would be better to provide experiments with more than 5 vertices.
- What would be the result if the causal graph becomes denser?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the weakness above.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your efforts on our paper. We address your concerns below.
**Q1.** The paper is rather incremental as the core of the method is based on [19] to use proxy variables to detect and eliminate the confounding effect brought by the subsampling.
**A**: First note that our paper solves a very different problem from [19]. Our method is the first to achieve nonparametric causal identification in subsampled time series, setting it apart from existing methods that are limited to the linear case.
Moreover, at the core of our method lie the formulation of the subsampling bias (Def. 3.2, Prop. 3.3) and the identification of the separation set (Thm. 3.5 (1)). These analyses serve as the foundation for our use of proxy variables.
**Q2.** Moreover, the method proposed by [19] seems to require more assumptions that are not disclosed in this paper, and it is unclear whether those specific assumptions can be satisfied in this work.
**A**: [19] requires the smoothness of the structural equation and the invertibility of the transition matrix. Both can be satisfied in our work.
Due to space limits, in Asm. 2.6 we used the shorter version of their assumptions (Exam. 4.4, [19]). We will add the full assumptions to the manuscript as suggested.
**Q3.** In addition, this work assumes the self-exciting property of the time series, which does not necessarily hold; it would be interesting to see how to find the proxy variables and what the result would be in this case.
**A**: As stated in Rem. 2.11, the self-causation assumption is commonly used in time series. For cases where this assumption does not hold, no immediate guarantee of finding suitable proxies can be given. We will work on this in future work.
**Q4.** It would be better to provide experiments with more than 5 vertices. What would be the result if the causal graph becomes denser?
**A**: Our method is consistently accurate under different scales and densities. Please refer to the supplementary PDF for details. | Summary: In this paper, the author(s) propose a new technique to learn the summary graph of time-series data. As a motivation for their work, the author(s) discuss the interesting application of learning causal pathways in Alzheimer’s disease. The time series model studied in this work is quite general, and suitable for many applications. The author(s) provide a simple algorithmic approach to learn the summary graph. This algorithm is essentially based on verifying d-separation, by testing conditional independence. The author(s) conclude with an experimental comparison, showing that their method achieves superior performance than a baseline, using various synthetic datasets and a real-world medical application.
Strengths: This paper studies a very important and difficult problem. The application of learning causal pathways in Alzheimer’s disease is very relevant. The paper is also well-written and easy to follow.
Weaknesses: My main concern pertains to the faithfulness assumption. Faithfulness is useful for getting identifiability results, and it allows one to recover the causal structure by testing conditional independence. However, in many scenarios there is no practical reason to assume faithfulness. The use of faithfulness significantly limits the novelty of the work. If I understand correctly, the implementation of their algorithm essentially uses conditional independence tests to recover the causal structure. Testing conditional independence is practically problematic. Hence, I also have doubts about the scalability of their method.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Can you please describe exactly step 1 in your algorithm? How do you test conditional independence?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes, the author address limitations in Section 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your efforts on our paper. We address your concerns below and hope this can help you re-evaluate our paper.
**Q1.** My main concern pertains to the faithfulness assumption. Faithfulness is useful for getting identifiability results, and it allows for recovery of the causal structure by testing conditional independence. However, in many scenarios, there is no practical reason to assume faithfulness. The use of faithfulness significantly limits the novelty of their work.
**A**: Our method can work under the v-adjacency-faithfulness assumption [1], which requires that any two variables connected by an inducing path in the causal structure are dependent given any conditioning set. This assumption is strictly weaker than strong faithfulness and can handle common counterexamples such as the unshielded collider.
The faithfulness assumption or its weaker variations have been widely utilized in constraint-based and score-based causal discovery approaches. This can be justified by the mathematical insignificance of unfaithful distributions (a lower-dimensional plane in a higher-dimensional space) [2-4]. Furthermore, a recent investigation [5] indicates that any causal learning method deemed effective must adhere to the standard practice of incorporating algorithms under the faithfulness condition. This highlights the essential nature and significance of the faithfulness condition.
[1] Zhang J, Eberhardt F, Mayer W, Li MJ. ASP-based discovery of semi-Markovian causal models under weaker assumptions. IJCAI. 2019.
[2] Meek C. Strong completeness and faithfulness in Bayesian networks. UAI. 1995.
[3] Spirtes P, Glymour CN, Scheines R. Causation, prediction, and search. MIT press. 2000.
[4] Uhler C, Raskutti G, Bühlmann P, Yu B. Geometry of the faithfulness assumption in causal inference. The Annals of Statistics. 2013.
[5] Lin H, Zhang J. On learning causal structures from non-experimental data without any faithfulness assumption. Algorithmic Learning Theory. 2020.
**Q2.** Testing conditional independence is practically problematic. Hence, I also have doubts on the scalability of their method.
**A**: Our method is scalable to larger sizes of the causal graph. For instance, it takes only 3 hours to run a simulation experiment with 45 variables (see Fig. 1 in the supplementary PDF).
In contrast, the baselines [10-12] (functional causal model-based), which use the EM algorithm for iterative estimation, cannot produce results in a feasible amount of time; the baseline Rhino [Gong et al. 2023] (score-based), which involves a double-nested optimization, requires up to 6-8 hours of training to converge.
To summarize, we believe that scalability is a common challenge for causal discovery methods, not just constraint-based ones. Addressing this issue is beyond the scope of this paper.
[1] Gong W, Jennings J, Zhang C, Pawlowski N. Rhino: Deep causal temporal relationship learning with history-dependent noise. ICLR. 2023.
**Q3.** Can you please describe exactly step 1 in your algorithm? How do you test conditional independence?
**A**: As mentioned in line 220, we use the Fast Causal Inference (FCI) algorithm implemented in the $\mathrm{causallearn}$ package to perform step 1. We use the kernel-based conditional independence test [Zhang et al. 2011].
[1] Zhang K, Peters J, Janzing D, Schölkopf B. Kernel-based conditional independence test and application in causal discovery. UAI. 2011.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the reviewers for answering my questions. Given that the author(s) addressed my concerns in detail, and given the overall discussion and positive scores from the other reviewers, I will raise my score accordingly. | Rebuttal 1:
Rebuttal: We would like to express our gratitude to all the reviewers for their efforts and valuable comments. We are particularly pleased to hear that our work addresses an important and challenging problem (3AR7,Y9xG, GTBD, XP2K) and that our method is considered novel, solid (3AR7,Y9xG,7zAn), and well-presented (3AR7,Y9xG, GTBD,7zAn, XP2K).
Regarding the concerns raised, we first would like to emphasize that all our assumptions, including the faithfulness assumption, the SVAR assumption, and the self-causation (also known as autocorrelation) assumption, are commonly made in the literature [13-15, 26-27].
Furthermore, to demonstrate the scalability of our method, we conducted experiments on graphs with varying numbers of nodes $d=\{5,15,25,35,45\}$. The results are presented in Fig. 1 in the supplementary PDF. As shown, our algorithm consistently performs well ($F_1$-score = $\{0.94,0.94,0.93,0.91,0.90\}$) across different scales. These results validate that our method can effectively handle problems of various sizes.
Pdf: /pdf/d5746cc138b12a7b193d02f55aae1de31383e78c.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper studies the problem of subsampled time series in causal discovery, in which the unobserved time steps may lead to the existence of latent confounders. To this end, this paper proposes a constraint-based algorithm by leveraging proxy variables to remove the bias induced by latent confounders. The experimental results verify the effectiveness of the proposed algorithm.
Strengths: 1. This paper addresses the problem of subsampled time series in causal discovery, which is important but challenging.
2. The paper is well-structured and written.
3. The experimental results show that the proposed method outperforms several representative baselines.
Weaknesses: 1. In Theorem 2.8, extra assumptions are required for testing conditional independence relations in the related literature, but they are not discussed in this paper.
2. How does one search for the proxy variable of the target hidden variables? If an invalid proxy variable is selected, what is the output of the proposed algorithm?
3. In simulation data, the dimension of the random graph is only five. Can you show the performance of the proposed method in a larger-scale network?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Refer to Weaknesses
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Refer to Weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your efforts and valuable suggestions on our paper. We address your concerns below.
**Q1.** In Theorem 2.8, extra assumptions are required for testing conditional independence relations in the related literature, but they are not discussed in this paper.
**A**: These assumptions are mentioned in Asm. 2.6 and Rem. 2.7.
**Q2.** How does one search for the proxy variable of the target hidden variables? If an invalid proxy variable is selected, what is the output of the proposed algorithm?
**A**: The proxy of each hidden variable is itself at some observable time in the future, so no searching is needed. We mentioned this in Thm. 3.5 (line 190).
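To make the proxy construction concrete, here is a small illustrative simulation (not the authors' code; the SVAR coefficients are arbitrary assumptions) of a first-order bivariate time series subsampled by a factor $k$: the skipped steps become hidden variables, and under self-causation each variable's next observed value is a descendant of its hidden intermediate value and can serve as its proxy.

```python
import random

def simulate_subsampled_svar(steps=200, k=2, seed=0):
    # first-order bivariate SVAR with self-causation (A->A, B->B) and a
    # cross-lag edge A->B; the coefficients are arbitrary illustrative choices
    rng = random.Random(seed)
    a, b, full = 0.0, 0.0, []
    for _ in range(steps):
        a, b = (0.7 * a + rng.gauss(0, 1),
                0.7 * b + 0.5 * a + rng.gauss(0, 1))  # right side uses lagged values
        full.append((a, b))
    observed = full[::k]  # only every k-th step is measured; the rest are hidden
    return full, observed

full, observed = simulate_subsampled_svar()
# observed[1] == full[2]: the value at the next sampled step, which under
# self-causation acts as a proxy for the hidden step full[1]
```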
**Q3.** In simulation data, the dimension of the random graph is only five. Can you show the performance of the proposed method in a larger-scale network?
**A**: Our method is consistently accurate under different scales. Please refer to the supplementary PDF for details.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. My score will remain unchanged. | null | null | null | null | null | null |
Doubly-Robust Self-Training | Accept (poster) | Summary: This paper presents a very simple approach to semi-supervised learning that utilizes both labeled and unlabeled datasets. When there is a large amount of unlabeled data available, following the same distribution as the labeled dataset, the most effective method to leverage this unlabeled data for training is self-training, where pseudo labels are generated and used for training. The main limitation of self-training is that the performance can degrade when the pseudo-labels produced by the predictor for the unlabeled set are not accurate.
In this paper, the authors propose a very simple method to overcome this limitation by replacing the conventional loss for the labeled dataset and the unlabeled dataset in self-training. This modification allows the model to be trained effectively in both cases, whether the pseudo-labels are correct or not, leading to improved performance in all scenarios. The effectiveness of this approach is experimentally demonstrated on image classification benchmarks and 3D object detection benchmarks.
Strengths: Overall, the method is very simple, intuitive, and quite novel. The paper is well-written and it provides a well-derived explanation using equations for both cases where the pseudo labels are accurate and when they are not. Additionally, the method is thoroughly analyzed from a theoretical perspective.
Weaknesses: Although the theoretical derivation demonstrates the soundness of the method, there is doubt regarding its effectiveness in experiments.
1. In particular, in the experiments (Sec. 3.1), the authors use a curriculum-based loss in each epoch. With $\alpha_t < 1$, the proposed method will behave like self-training (exactly when $\alpha_t = \frac{n}{m+n}$), and the behavior/effectiveness of the proposed method may not be well represented in this case.
2. Image classification experiments were conducted using ImageNet-100. However, when the labeled set ratio is 100%, the top-1 accuracy of DaViT and ResNet50 is remarkably low at 47.8% and 46.7%, respectively, compared to the top-1 accuracy reported in other papers. According to the DaViT and ResNet papers, the top-1 accuracy on the more complex ImageNet-1k task using DaViT-tiny and ResNet is 82.8% and 79.26%, respectively. Following [1], the top-1 accuracy of various ResNet50-based methods on ImageNet-100 consistently surpasses 70%. It appears that the baseline has not been sufficiently well-trained.
3. There is a lack of comparison with other semi-supervised methods. While this method compares with the basic self-training loss, it is necessary to compare it with various methods that utilize pseudo-labels for self-training [2], [3], [4]. Most semi-supervised image classification methods have performed experiments on benchmark datasets such as ImageNet1k and CIFAR100. These methods achieve a top-1 accuracy of 72% or higher when using a labeled dataset of 10% on ImageNet1k. Therefore, it is necessary to compare the effectiveness of this method on these benchmark datasets.
4. Similarly, in semi-supervised 3D object detection, it is necessary to compare the results with existing baseline methods [5], [6].
5. Although this paper proposes a method for the distribution mismatch case in Section 2.4, estimating the probability values of each sample, p(x) and q(x), for each distribution is not easy. Practical methods for this issue have not been provided, and experiments on this aspect are also lacking. Practical algorithms and experiments need to be provided in order to address this concern.
[1] Zelin Zang, et al. DLME: Deep Local-flatness Manifold Embedding. ECCV 2022
[2] Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V. Le. Unsupervised data augmentation for consistency training. In Advances in Neural Information Processing Systems, 2020
[3] Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Han Zhang, and Colin Raffel. FixMatch: Simplifying semi-supervised learning with consistency and confidence. In Advances in Neural Information Processing Systems, 2020
[4] Hieu Pham, Qizhe Xie, Zihang Dai, and Quoc V Le. Meta pseudo labels. In IEEE Conference on Computer Vision and Pattern Recognition, 2021
[5] He Wang, Yezhen Cong, Or Litany, Yue Gao and Leonidas J. Guibas. 3DIoUMatch: Leveraging IoU Prediction for Semi-Supervised 3D Object Detection. In IEEE Conference on Computer Vision and Pattern Recognition, 2021
[6] Na Zhao, Tat-Seng Chua and Gim Hee Lee. SESS: Self-Ensembling Semi-Supervised 3D Object Detection. In IEEE Conference on Computer Vision and Pattern Recognition, 2020
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: see above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Future work is discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments and suggestions. We have corrected all the typos suggested. Please find our responses to each comment below.
## Comment 1
**Reviewer:**
> In the experiments (Sec. 3.1), the authors use a curriculum-based loss in each epoch. With $\alpha_t<1$, the proposed method will behave like self-training (exactly when $\alpha_t = \frac{n}{m+n}$), and the behavior/effectiveness of the proposed method may not be well represented in this case.
**Response:**
Thank you for your comments! Our curriculum-based loss only matches the original self-training loss when exactly $\alpha_t=0$ (which reduces to $\frac{1}{m+n}\sum_{i=1}^{m+n} \ell_\theta(X_i, \hat f(X_i))$). For any $\alpha_t>0$, it is an interpolation between the original self-training loss and our proposed loss function. Our experimental results also demonstrate a significant improvement over the original self-training loss with $\alpha_t$ always equal to $0$.
We would also like to clarify that even when $\alpha_t = \frac{n}{m+n}$, this loss is **very different from the self-training loss**. In this case, our loss becomes $\frac{1}{m+n}\sum_{i=1}^{m+n} \ell_\theta(X_i, \hat f(X_i)) - \frac{1}{m+n}\sum_{i=m+1}^{m+n} \ell_\theta(X_i, \hat f(X_i)) + \frac{1}{m+n}\sum_{i=m+1}^{m+n} \ell_\theta(X_i, Y_i)$. Note that none of the terms cancel, since the second and third terms average over the labeled samples and are reweighted, while the first term averages over both labeled and unlabeled samples. In contrast, the original self-training loss is $\frac{1}{m+n}\sum_{i=1}^{m+n} \ell_\theta(X_i, \hat f(X_i))$. Thus our method is fundamentally different from the original self-training loss whenever $\alpha_t>0$.
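As a concrete illustration of this interpolation (a minimal sketch over hypothetical per-sample loss lists, not the authors' implementation), the curriculum loss can be written as the pure pseudo-label term plus an $\alpha_t$-weighted correction on the labeled samples:

```python
def curriculum_dr_loss(pseudo_unlab, pseudo_lab, true_lab, alpha):
    # pseudo_unlab: per-sample losses l(X_i, f_hat(X_i)) on the m unlabeled points
    # pseudo_lab:   losses on the n labeled points against the pseudo labels
    # true_lab:     losses on the same n labeled points against the true labels
    m, n = len(pseudo_unlab), len(pseudo_lab)
    base = (sum(pseudo_unlab) + sum(pseudo_lab)) / (m + n)  # pure pseudo-label term
    correction = (sum(true_lab) - sum(pseudo_lab)) / n      # doubly-robust correction
    return base + alpha * correction
```

Setting `alpha = 0` recovers the pure pseudo-label loss $\frac{1}{m+n}\sum_{i=1}^{m+n}\ell_\theta(X_i,\hat f(X_i))$, while `alpha = 1` yields the full doubly-robust loss with the $\frac{1}{n}$-weighted labeled terms.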
## Comment 2
**Reviewer:**
> Image classification experiments were conducted using ImageNet-100. However, when the labeled set ratio is 100%, the top-1 accuracy of DaViT and ResNet50 is remarkably low at 47.8% and 46.7%, respectively, compared to the top-1 accuracy reported in other papers. According to the DaViT and ResNet papers, the top-1 accuracy on the more complex ImageNet-1k task using DaViT-tiny and ResNet is 82.8% and 79.26%, respectively. Following [1], the top-1 accuracy of various ResNet50-based methods on ImageNet-100 consistently surpasses 70%. It appears that the baseline has not been sufficiently well-trained.
**Response:**
- Thank you for your suggestions. **We have included experiments for sufficiently well-trained ImageNet-100 results (training for full 300 epochs) in Figure 1 of the uploaded one-page PDF, and well-trained CIFAR-10-4K, CIFAR-100-10K in Table 1 of the PDF / General Response.** Even in these cases, our method still gives a universal improvement over the baselines.
- Our method gives better improvement when the teacher model is not extremely accurate. If the teacher model is always correct, then our method reduces to the original self-training procedure. This is why we previously focused on the case where the baseline is not sufficiently well-trained. We will add more discussion and all the comparisons, including the well-trained and less-trained cases, in the revision.
## Comment 3
**Reviewer:**
> There is a lack of comparison with other semi-supervised methods. While this method compares with the basic self-training loss, it is necessary to compare it with various methods that utilize pseudo-labels for self-training [2], [3], [4]. Most semi-supervised image classification methods have performed experiments on benchmark datasets such as ImageNet1k and CIFAR100. These methods achieve a top-1 accuracy of 72% or higher when using a labeled dataset of 10% on ImageNet1k. Therefore, it is necessary to compare the effectiveness of this method on these benchmark datasets. Similarly, in semi-supervised 3D object detection, it is necessary to compare the results with existing baseline methods [5], [6].
**Response:**
Thank you for your suggestion! **We have included comparisons with 11 other SOTA baselines in Table 1 of the newly uploaded PDF.** Overall, our method still gives the best improvement, and it can be combined with existing baselines. We will add the results and more discussion in the revision as well.
## Comment 4
**Reviewer:**
> Although this paper proposes a method for the distribution mismatch case in Section 2.4, estimating the probability values of each sample, p(x) and q(x), for each distribution is not easy. Practical methods for this issue have not been provided, and experiments on this aspect are also lacking. Practical algorithms and experiments need to be provided in order to address this concern.
**Response:**
Thank you for your comments! We agree that estimating the importance ratio can be hard in practical scenarios where we don't have access to the marginal distributions. We are happy to provide simulated results for this case. However, we would also like to point out that most self-training pipelines are based on the setting where there is no distribution mismatch between the labeled and unlabeled samples. Our proposed method makes a first attempt towards a provable method for addressing the distribution mismatch phenomenon, and it explains the name `doubly-robust' from statistics. That said, we will note that the proposed method for distribution shift may not be practical due to the difficulty of estimating the importance ratio, which is left as an open problem for future research.
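To give a sense of what even a toy estimate of the importance ratio involves, here is a crude one-dimensional histogram-based sketch (purely illustrative; the function name and binning scheme are our assumptions, not the paper's method, and practical settings require far more careful density-ratio estimators):

```python
from collections import Counter

def histogram_density_ratio(p_samples, q_samples, bins, lo, hi):
    # crude estimate of the importance ratio w(x) = p(x) / q(x) on a shared
    # 1-D grid of equal-width bins over [lo, hi]
    width = (hi - lo) / bins
    def hist(xs):
        counts = Counter(min(int((x - lo) / width), bins - 1) for x in xs)
        return [counts[i] / len(xs) for i in range(bins)]
    hp, hq = hist(p_samples), hist(q_samples)
    # per-bin ratio; bins with no q mass are reported as 0.0 to avoid division by zero
    return [hp[i] / hq[i] if hq[i] > 0 else 0.0 for i in range(bins)]
```

Even this toy estimator illustrates the difficulty: it scales poorly with dimension and is unstable wherever q(x) is small, which is exactly the regime that matters under distribution shift.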
\
We wish that our response has addressed your concerns, and turns your assessment to the positive side. If you have any questions, please feel free to let us know during the rebuttal window. We appreciate your suggestions and comments! Thank you! | Summary: The paper proposed a doubly robust loss for self-training. The proposed loss is analysed and shown to have preferable theoretical properties.
Strengths: 1. The idea is interesting: a simple change from 1/(m+n) to 1/n (lines 51-53) leads to a doubly robust loss function for self-training.
2. The writing is clear and easy to follow
Weaknesses: 1. While the proposed doubly robust loss for self-training enjoys theoretical advantages, directly minimizing the loss during network training leads to instability. The actual loss used in line 219 is very different especially in the early epochs.
2. Lack of comparison to other stronger semi-supervised learning baselines. For example, the authors discuss MixMatch and FixMatch as related work but did not compare with them in experiments.
minor:
1. I don't think the description in lines 79-87 is precise. MixMatch/FixMatch do not pre-train a teacher model on 'labeled' data.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: NA
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments and suggestions. We have corrected all the typos suggested. Please find our responses to each comment below.
## Comment 1
**Reviewer:**
> While the proposed doubly robust loss for self-training enjoys theoretical advantages, directly minimizing the loss during network training leads to instability. The actual loss used in line 219 is very different especially in the early epochs.
**Response:**
Thank you for your comments! We would like to mention that the actual loss is an interpolation between two losses: in the early epochs, it is close to the original pseudo-labeling method, which first uses the pseudo-labels to learn a student model that is similar to the teacher model. In the later epochs, it utilizes the new proposed loss function to correct the learned student model. We find that it stabilizes the training greatly.
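The interpolation described in this response can be sketched concretely. The doubly-robust form and the ramp schedule below are our illustrative assumptions (the paper's exact loss around its line 219 and its schedule may differ):

```python
import numpy as np

def pseudo_label_loss(theta, X_all, f_all):
    # Squared loss against the teacher's pseudo-labels on all samples.
    return np.mean((X_all @ theta - f_all) ** 2)

def doubly_robust_loss(theta, X_all, f_all, X_lab, f_lab, y_lab):
    # Assumed DR form: pseudo-label loss on all samples, minus the
    # pseudo-label loss on the labeled subset, plus the true-label loss
    # on the labeled subset (the 1/(m+n) vs. 1/n re-weighting).
    return (np.mean((X_all @ theta - f_all) ** 2)
            - np.mean((X_lab @ theta - f_lab) ** 2)
            + np.mean((X_lab @ theta - y_lab) ** 2))

def interpolated_loss(theta, epoch, num_epochs,
                      X_all, f_all, X_lab, f_lab, y_lab):
    # Early epochs: mostly the original pseudo-labeling loss (stable);
    # later epochs: the doubly-robust correction takes over.
    alpha = min(1.0, epoch / max(1, num_epochs // 2))  # assumed ramp
    return ((1 - alpha) * pseudo_label_loss(theta, X_all, f_all)
            + alpha * doubly_robust_loss(theta, X_all, f_all,
                                         X_lab, f_lab, y_lab))
```

At `epoch = 0` this reduces to plain pseudo-labeling; once `alpha` reaches 1, it is exactly the doubly-robust loss, matching the "interpolation between two losses" described above.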
## Comment 2
**Reviewer:**
> Lack of comparison to other stronger semi-supervised learning baselines. For example, the authors discuss MixMatch and FixMatch as related work but did not compare with them in experiments.
**Response:**
Thank you for the comments! We have included comparisons with other baselines in **the uploaded new PDF / General Response**, which compares against MixMatch, FixMatch, and another 9 baselines on the CIFAR-10 and CIFAR-100 datasets. We will add it in the revision as well.
## Comment 3
**Reviewer:**
> I don't think the description in lines 79-87 is precise. MixMatch/FixMatch do not pre-train a teacher model on 'labeled' data.
**Response:**
Thank you for your suggestion! We have corrected our paper to make this part precise. MixMatch / FixMatch are methods that do not rely on pre-training a teacher model on the labeled dataset.
\
Thanks again for your time and effort! For any other questions, please feel free to let us know during the rebuttal window.
---
Rebuttal Comment 1.1:
Comment: The authors' rebuttal partly addresses my concerns.
On the one hand, I appreciate the simplicity and favorable theoretical properties of the proposed method, but on the other hand, the additional experiments show that the performance improvement seems modest.
I keep my score unchanged.
---
Reply to Comment 1.1.1:
Title: Thank you for your response!
Comment: Thank you for your response! We appreciate your time for providing valuable suggestions and comments, which help greatly improve our paper.
To add one additional note, our additional experiments mostly concern the case where the teacher model reaches 90% accuracy or higher. In this case, it is expected that the gain for a better teacher model is smaller than the gain for a worse teacher model. When the teacher model is perfect (100% accurate), our method reduces to the pseudo-labeling method, and the gain over naive pseudo-labeling will be 0. Thus when the teacher model is close to perfect, there will only be a marginal improvement compared with the pseudo-labeling method. However, if we already have a very accurate teacher model, the necessity of re-training a new student model is also unclear.
Our method really shines when it is uncertain how good the teacher model is, or when the teacher model is not a perfect predictor. This is reflected in our original experiments in the paper. And we can show that even when the teacher model is very accurate, our proposed method still achieves SOTA performance among all 12 estimators considered. | Summary: The authors propose a very simple yet effective modification to the original loss for self-training by re-weighting terms of the loss function. This change effectively balances between using the pseudo-labels when the predictor is strong and learning not to use them when the predictor is unreliable, making the loss doubly robust. They provide a sound theoretical analysis and empirical evaluation on classification and object detection tasks substantiating their claims.
Strengths: - Strengths
- The paper is well written and easy to follow with sufficient background and motivated examples given to present the chain of reasoning well.
- For linear predictor, the proposed loss is unbiased with lower variance which is strictly better than self-training
- The results on both classification and 3D object detection highlight clear improvement over standard self distillation
- The technical novelty in the paper is limited, with just a small modification to the overall loss. But the theoretical insights, including guarantees for general losses, and positive experimental results make it a meaningful contribution. The simplicity of the modification also makes it much more likely to be adopted and have higher impact.
Weaknesses: - Weaknesses
- The analysis provided is for very simplistic settings of linear predictor or mean-predictions and it’s unclear how much of it translates to realistic settings of over-parameterized deep nets trained on SGD.
- For image classification, the baselines used in the paper are meaningful but thorough comparison with other state of the art self-imitation methods like noisy-student etc is lacking.
- The evaluation is restricted to lower data regimes as the model shines when the training data is limited, making their evaluation a bit more contrived as compared to real world scenarios. It seems the proposed loss should shine more when unlabelled data scales, so some experimentation on the impact on performance as the unlabelled data scales would also be interesting to see
- One important use of self-distillation is using unlabelled data to show domain adaptation to that domain. I would like to see experiments for that as well.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: * In Section 2.1, the intuitive interpretation for when the predictor is bad is not super clear and the paper would benefit from some elaboration. If just m goes to infinity, how would the two terms cancel out?
* In the case of distribution mismatch, what's the motivation for changing the first term to an average of $(n)$ samples instead of $(m+n)$? Also, how are the importance weights $\pi(x)$ chosen?
* Did the authors explore tuning the predictor $\hat{f}$ on the labelled examples before generating the pseudo-examples to train the student?
Typos and possible errors
- Inconsistent notation : line 155 should be $\hat{\theta}$ instead of $\theta^{*}$
- In line 161, I believe the upper bound should be $6/n(var[Y] ..) $ instead of $4/n$. Please verify!
- in equation in line 171, it should be $\theta_{SL}$ instead of $\theta_{DR}$
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have adequately addressed some limitations. Other suggestions are listed above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments and suggestions. We have corrected all the typos suggested. Please find our responses to each comment below.
## Comment 1
**Reviewer:**
> The analysis provided is for very simplistic settings of linear predictor or mean-predictions and it’s unclear how much of it translates to realistic settings of over-parameterized deep nets trained on SGD.
**Response:**
Thank you for your comments! We would like to clarify that we have a **guarantee for arbitrary loss (including deep neural network) in Theorem 2 in Section 2.3**. This shows that even in the case of deep learning, our proposed algorithm still converges to a good point. In contrast, the existing method fails even in the simplest case of mean estimation. Our section 2.2 on mean estimation is only a motivating example for the general case.
## Comment 2
**Reviewer:**
> For image classification, the baselines used in the paper are meaningful but thorough comparison with other state-of-the-art self-imitation methods like noisy-student etc is lacking.
**Response:**
Thank you for the comments! We have included comparisons with other baselines in the uploaded new PDF, which compares against another 12 baseline algorithms on the CIFAR-10 and CIFAR-100 datasets. We will add more comprehensive comparisons in the revision as well.
## Comment 3
**Reviewer:**
> One important use of self-distillation is using unlabelled data to show domain adaptation to that domain. I would like to see experiments for that as well.
**Response:**
Thank you for your comments! We have included theoretical results on the right algorithm for distribution mismatch. However, the proposed method requires knowledge about the importance ratio. Making the proposed algorithm practically implementable still remains an open problem.
## Comment 4
**Reviewer:**
> In Section 2.1, the intuitive interpretation for when the predictor is bad is not super clear and paper would benefit from some elaboration. If just $m$ goes to infinity, how would the two terms cancel out?
**Response:**
Thank you for your comments! The finite-sample guarantee when neither $m$ nor $n$ goes to infinity is given in Section 2.2 for mean estimation, and in Section 2.3 for general loss functions. In short, if only $m$ goes to infinity, there will be additional noise introduced whose standard deviation is proportional to $1/\sqrt{n}$. This can be seen from the third equation in Proposition 1, or from Theorem 2.
## Comment 5
**Reviewer:**
> In the case of distribution mismatch, what’s the motivation for changing the first term to average of $(n)$ sample instead of $(m+n)$? Also how are the importance weights chosen?
**Response:**
Thank you for your comments! Apologies, there is a minor typo here: we would like to change the first term from the average of $(m+n)$ samples to the average of $m$ samples. This is because the marginal distribution of the first $m$ (unlabeled) samples is different from the distribution of the last $n$ (labeled) samples. If we mix both distributions together, the resulting distribution won't be the same as the distribution of the unlabeled samples, and in this case we cannot let the first two terms cancel each other due to the distribution mismatch. Thus in the first term, we only take the average over the $m$ unlabeled samples. In practice, the importance weights are required to be known or estimated from the representations of the features $X$.
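As a concrete illustration of how importance weights could be estimated from feature representations in practice, one standard approach (our suggestion, not part of the paper) is the density-ratio trick: train a probabilistic classifier to distinguish samples from the two marginals, then convert its scores into ratio estimates.

```python
import numpy as np

def estimate_importance_weights(X_p, X_q, steps=3000, lr=0.5):
    """Density-ratio trick: fit a logistic classifier to separate samples
    drawn from q (label 1) from samples drawn from p (label 0); then
    q(x)/p(x) is approximately (n_p/n_q) * s(x) / (1 - s(x))."""
    X = np.vstack([X_p, X_q])
    y = np.concatenate([np.zeros(len(X_p)), np.ones(len(X_q))])
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        s = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (s - y) / len(y)  # logistic-loss gradient step
    def pi(x):
        s = 1.0 / (1.0 + np.exp(-(np.append(x, 1.0) @ w)))
        return (len(X_p) / len(X_q)) * s / (1.0 - s)
    return pi
```

For two 1-D Gaussians, the recovered ratio is close to the true ratio `exp(x - 0.5)`; for deep feature representations, the same recipe applies with a stronger classifier, though accurate estimation remains hard, as the response notes.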
## Comment 6
**Reviewer:**
> Did the authors explore tuning the predictor on the labelled examples before generating the pseudo-examples to train the student?
**Response:**
Thank you for your comments! Yes, we select the best checkpoint for the predictors on the labeled examples, and then generate the pseudo-examples to train the students. This applies to both the baseline algorithms and our proposed algorithm.
\
Thanks again for your time and effort! For any other questions, please feel free to let us know during the rebuttal window.
---
Rebuttal Comment 1.1:
Comment:
Thank you, the authors have addressed some concerns and weaknesses. The newly added results also look promising. I would like to increase the rating to accept. | Summary: This paper proposes a pseudo-labeling approach that balances out the supervised signal between the labeled and incorrect pseudo-labeled datapoints during the training process. The aim is to only account for the pseudo-labels when they are correctly labeled, which may happen when the covariate distributions of the unlabeled dataset and the labeled dataset match. They show some analysis of the proposed loss and results on ImageNet100 (a subset with 100 random classes from ImageNet-1k), mini-ImageNet100, and the nuScenes dataset.
Strengths: The proposed method is clearly written, and the paper can easily be understood.
The paper provides all necessary details and deep explanations for experimental results.
Comprehensive algorithmic analysis and clear motivation.
The proposed method effectively improves over vanilla pseudo-labeling [1]
[1] D.-H. Lee. Pseudo-label : The simple and efficient semi-supervised learning method for deep neural networks. ICML 2013 Workshop : Challenges in Representation Learning (WREPL), 07 2013.
Weaknesses: Novelty and missing prior work: Pseudo-labeling is the de facto method for entropy regularization techniques in semi-supervised learning problems. This is a paper that tackles the semi-supervised learning problem. Recent work has explored different ways to mitigate the error propagation from the teacher model and confirmation bias present in pseudo-labeling approaches. For example, [2,3,4] investigate the thresholding effect via fixed and curriculum-based approaches, with flexible thresholds that are dynamically adjusted for each class according to the current learning status. With such prior exploration and no comparison with any of that work, it's difficult to assess the importance and impact of this work, which seems limited and incomplete.
The paper in its current state also fails to provide technical details to validate fair comparisons in the main text. Furthermore, no ablations for any of the technical selections are conducted.
The alternative loss proposed for distribution mismatch (Section 2.4) is only shown in the method but not in the empirical section.
[2] Eric Arazo, Diego Ortego, Paul Albert, Noel E O’Connor, and Kevin McGuinness. Pseudolabeling and confirmation bias in deep semi-supervised learning. In IJCNN, pages 1–8, 2020.
[3] Paola Cascante-Bonilla, Fuwen Tan, Yanjun Qi, and Vicente Ordonez. Curriculum labeling: Revisiting pseudo-labeling for semi-supervised learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 6912–6920, 2021.
[4] Zhang B, Wang Y, Hou W, Wu H, Wang J, Okumura M, Shinozaki T. Flexmatch: Boosting semi-supervised learning with curriculum pseudo labeling. Advances in Neural Information Processing Systems. 2021 Dec 6;34:18408-19.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Prior literature [2,3,4,5] has shown that predefined threshold values impact the overall performance in pseudo-labeling. Using 0.3 as the threshold seems to be too permissive, allowing too many noisy pseudo-labels. How sensitive is this threshold in your setup?
[5] Oliver A, Odena A, Raffel CA, Cubuk ED, Goodfellow I. Realistic evaluation of deep semi-supervised learning algorithms. Advances in neural information processing systems. 2018;31.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: No limitation section is provided.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you for your valuable comments and suggestions. Please find our responses to each comment below.
## Comment 1
**Reviewer:**
> Novelty and missing prior work: Pseudo-labeling is the de facto method for entropy regularization techniques in semi-supervised learning problems. This is a paper that tackles the semi-supervised learning problem. Recent work has explored different ways to mitigate the error propagation from the teacher model and confirmation bias present in pseudo-labeling approaches. For example, [2,3,4] investigate the thresholding effect via fixed and curriculum-based approaches, with flexible thresholds that are dynamically adjusted for each class according to the current learning status. With such prior exploration and no comparison with any of that work, it's difficult to assess the importance and impact of this work, which seems limited and incomplete.
**Response:**
Thank you for your comments!
- We are happy to include comparisons with existing methods [2-4]. Please find **Table 1 in the uploaded PDF / General Response** for our comparisons with other existing baselines. Due to the long running time of all methods, for now we only include the comparison with [2]. We also compare with 11 other existing benchmark algorithms for image classification in Table 1 of the uploaded PDF. One can see that our method still achieves better performance than existing methods.
- Our proposed methodology is **fundamentally different from the existing pseudo-labeling idea in [2-4]**. In [2-4], the effect of thresholding is extensively studied. However, when the teacher model is very inaccurate, one can never expect the confidence of each label to be given correctly, and thus thresholding provides no gain in this case. In fact, we can show that even in the simplest case of mean estimation, all the methods in [2-4] still fail to provide the right solution, even with an infinite number of samples.
- As an orthogonal approach, we propose a simple loss function that automatically uses the labeled samples to test the validity of the teacher model. It is guaranteed to perfectly interpolate both cases: when the teacher model is completely wrong, we will only use the labeled samples; when the teacher model is completely correct, it will use all pseudo-labels. This is not achievable by the existing algorithms in [2-4] since the confidence on each label might be inaccurate as well.
- Our proposed method is **always unbiased and guaranteed to converge to the right solution, in contrast to any existing methods**. We show theoretically that the existing pseudo-labeling idea will give very poor performance when the teacher model is inaccurate. And only the proposed doubly-robust estimator will remain unbiased even in the simplest case such as mean estimation. Our experimental results validate our theoretical predictions.
- As we also show in the experiments for 3D object detection, our method can be directly combined with any threshold method in [2-4] and improve over the existing threshold-based method. We believe the proposed method is a very important and necessary alternative / add-on for all the existing pseudo-labeling ideas.
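The mean-estimation claim above is easy to verify numerically. The sketch below is our own construction, assuming the doubly-robust estimator takes the prediction-powered form (average teacher prediction over all samples plus the average labeled residual as a correction); it shows the naive pseudo-labeling estimate stays biased under a biased teacher while the doubly-robust estimate does not:

```python
import numpy as np

def dr_mean(y_lab, f_lab, f_all):
    # Assumed doubly-robust form: teacher predictions averaged over all
    # samples, corrected by the residual y - f on the labeled subset.
    return np.mean(f_all) + np.mean(y_lab - f_lab)

def pseudo_mean(y_lab, f_unlab):
    # Naive pseudo-labeling: trust the teacher on unlabeled samples.
    return (np.sum(f_unlab) + np.sum(y_lab)) / (len(f_unlab) + len(y_lab))

rng = np.random.default_rng(0)
mu, teacher_bias, n, m, trials = 1.0, 0.5, 50, 5000, 200
dr_est, pl_est = [], []
for _ in range(trials):
    y_lab = rng.normal(mu, 1.0, n)
    f_lab = y_lab + teacher_bias + rng.normal(0, 0.1, n)  # biased teacher
    f_unlab = rng.normal(mu + teacher_bias, 1.0, m)
    dr_est.append(dr_mean(y_lab, f_lab, np.concatenate([f_unlab, f_lab])))
    pl_est.append(pseudo_mean(y_lab, f_unlab))
```

With a teacher bias of 0.5 and many more unlabeled than labeled samples, the pseudo-labeling estimate lands near 1.5 while the doubly-robust estimate stays near the true mean of 1.0, regardless of how wrong the teacher is.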
## Comment 2
**Reviewer:**
> The paper in its current state also fails to provide technical details to validate fair comparisons in the main text. Furthermore, no ablations for any of the technical selections are conducted. The alternative loss proposed for distribution mismatch (Section 2.4) is only shown in the method but not in the empirical section.
**Response:**
Thank you for the comments!
- We will add the details about all the hyperparameters in the revised draft.
- We **have ablation studies for both the curriculum settings and the number of epochs** in Appendix B. We have also added a new ablation study on the pseudo-labeling threshold in Table 3 of the uploaded new PDF file. We will add more data points and ablations on the original pseudo-labeling methods in the final revision.
- The alternative loss for distribution mismatch requires the knowledge of the importance ratio between the target distribution and the original data distribution. We are happy to include simulated results. However, estimating such importance ratio in practical scenarios can be hard.
## Comment 3
**Reviewer:**
> Prior literature [2,3,4,5] has shown that predefined threshold values impact the overall performance in pseudo-labeling. Using 0.3 as the threshold seems to be too permissive, allowing too many noisy pseudo-labels. How sensitive is this threshold in your setup?
**Response:**
Thank you for your comments! We have added a new ablation study on the pseudo-labeling threshold in **Table 3 of the uploaded rebuttal PDF / General Response**. One can see that if we raise the threshold from 0.3 to 0.5 to allow fewer noisy pseudo-labels, the performance indeed degrades. We will include more data points and the ablations for the original pseudo-labeling method in the final revision.
\
We wish that our response has addressed your concerns, and turns your assessment to the positive side. If you have any questions, please feel free to let us know during the rebuttal window. We appreciate your suggestions and comments! Thank you!
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed explanation and additional experimental results. I've also read the thoughtful discussion between other reviewers and the authors, which helped clarify some of my questions. However, the experimental results show marginal improvements, and important baselines are missing.
It is also not clear what the authors mean by: "we propose a simple loss function that automatically uses the labeled samples to test the validity of the teacher model" -- pseudo-labeling methods do the same at each iteration, and the teacher model is validated using a validation set, which contains the true labels; thus, it is hard to say the proposed method does something different from traditional pseudo-labeling and its variations.
In addition, pseudo-labeling approaches are really fast to train, and the datasets are very small. Unfortunately, Table 1 only shows comparisons against consistency regularization methods. Given the nature of the proposed approach, it is important to make fair comparisons with existing entropy regularization methods. Thus, I updated my score to borderline accept.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response! Below is a clarification to our previous message "we propose a simple loss function that automatically uses the labeled samples to test the validity of the teacher model" and its difference with traditional pseudo-labeling method:
Our main message is the sentence after the quoted sentence: such simple doubly-robust loss leads to a guaranteed unbiased estimation in the case of both mean estimation and general neural networks. In contrast, even though the teacher model is validated using a validation set, training with traditional pseudo-labeling based methods is still biased. We agree that in some algorithms, pseudo-labeling methods also validate and filter the unlabeled samples based on the ground truth labels. So we will make it precise in our revision and add more clarifications on the main differences between our algorithms and traditional pseudo-labeling methods.
Thank you again for your comments and suggestions! | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for the valuable comments and suggestions, which help us greatly improve our paper. Besides individual responses, we summarize the revision and new experimental results in the one-page PDF uploaded. We also include the markdown table for your reference, which are the same results as the PDF.
### Additional Experiment 1 (Table 1): Comparisons with previous SOTAs on CIFAR-10 and CIFAR-100
In Table 1 of the uploaded PDF, we compare with **another 11 baselines** in terms of error rate on CIFAR-10-4K and CIFAR-100-10K under the same settings (i.e., Wide ResNet-28-2 for CIFAR-10 and WRN-28-8 for CIFAR-100). We show that our method is only 0.04% behind the best method, Meta Pseudo Labels, on CIFAR-10-4K, and achieves the best performance on CIFAR-100-10K. (Numbers are not reported for some of the methods on CIFAR-100-10K; we will try to re-implement and re-run these methods in the future.)
| Method | CIFAR-10-4K (error rate, \%) | CIFAR-100-10K (error rate, \%) |
| ------------------ | ------------------- | --------------------- |
| Pseudo-Labeling | 16.09 | 36.21 |
| LGA + VAT | 12.06 | -- |
| Mean Teacher | 9.19 | 35.83 |
| ICT | 7.66 | -- |
| SWA | 5.00 | 28.80 |
| MixMatch | 4.95 | 25.88 |
| ReMixMatch | 4.72 | 23.03 |
| EnAET | 5.35 | -- |
| UDA | 4.32 | 24.50 |
| FixMatch | 4.31 | 23.18 |
| Meta Pseudo Labels | **3.89** | -- |
| **Ours** | 3.93 | **22.30** |
### Additional Experiment 2 (Figure 1): Sufficiently well-trained ImageNet-100
In our original experiments, we mostly focus on a teacher model that is not super accurate, since our method reduces to the original pseudo-labeling when the teacher model is completely correct for all labels. In this experiment, we fully train the teacher model for 300 epochs on ImageNet-100, raising the teacher model's accuracy to 88.4%. We show that even in this case, our method outperforms the original pseudo-labeling baseline.
| Data Fraction | Labeled Only (acc, \%) | Pseudo + Labeled (acc, \%) | Ours (acc, \%) |
| ---- | ---- | ---- | ---- |
| 20 | 63.59 | 64.87 | **67.16** |
| 50 | 81.23 | 82.92 | **85.47** |
| 80 | 85.50 | 86.98 | **87.57** |
| 100 | 88.01 | 89.43 | **90.61** |
### Additional Experiment 3 (Table 2): Comparisons with previous SOTAs on nuScenes object detection dataset
We compare with the idea of pseudo-labeling + confirmation bias in [1], and show that on the object detection dataset, our method still gives better performance for various labeled data fractions. Due to the time limit, we have not yet finished all the experiments with varying labeled fractions and other methods. We will include more comparisons in the final revision.
| Labeled Fraction | Labeled Only (mAP↑) | Labeled Only (NDS↑) | Labeled + Pseudo (mAP↑) | Labeled + Pseudo (NDS↑) | Doubly robust Loss (mAP↑) | Doubly robust Loss (NDS↑) | Pseudo-Labeling + Confirmation Bias (mAP↑) | Pseudo-Labeling + Confirmation Bias (NDS↑) |
|:----------------:|:-------------------:|:-------------------:|:-----------------------:|:-----------------------:|:-------------------------:|:-------------------------:|:-------------------------------------------:|:-------------------------------------------:|
| 1/24 | 7.56 | 18.01 | 7.60 | 17.32 | **8.18** | **18.33** | 7.80 | 16.86 |
| 1/16 | 11.15 | 20.55 | 11.60 | 21.03 | **12.30** | 22.10 | 12.15 | **22.89** |
### Additional Experiment 4 (Table 3): Ablation on the detection thresholds
We include ablation studies for different detection thresholds, showing that our choice of 0.3 is not too permissive. We will include more data points and the results on the original pseudo-labeling methods in the final revision.
| Labeled Data Fraction | τ = 0.3 (mAP↑) | τ = 0.3 (NDS↑) | τ = 0.5 (mAP↑) | τ = 0.5 (NDS↑) |
|:----------------------:|:--------------:|:--------------:|:--------------:|:--------------:|
| 1/24 | **8.18** | **18.33** | 4.37 | 13.17 |
| 1/16 | **12.30** | **22.10** | 8.09 | 19.70 |
### Novelty and Contributions
We would like to remark here that our proposed method is fundamentally different from **all existing pseudo-labeling-based methods.**
- Our proposed method is the first to have a convergence guarantee for any loss function, including neural networks (see Theorem 2), while most of the SOTA methods are built upon the original pseudo-labeling idea, which is biased even in the simplest setting of mean estimation.
- Based on the experimental results, our proposed method improves over almost all existing baselines, showing the power of the doubly-robust idea.
- Our method is the first that connects and unifies the idea of doubly robust estimator in statistics for propensity score estimation, the prediction-powered inference idea for confidence estimation, and the self-training idea in knowledge distillation and neural network training.
- Our method proposes an alternative loss that uses the labeled data to "correct" the unlabeled samples. It can also be combined with any of the existing pseudo-labeling methods to further improve their performance.
We hope that our pointwise responses below could clarify all reviewers’ confusion. We thank all reviewers’ time again and we are always ready to solve your concerns.
Pdf: /pdf/c3804d0fa10c300279d8ac8c673040d72d921f62.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Supply-Side Equilibria in Recommender Systems | Accept (poster) | Summary: The paper presents a theoretical study regarding the equilibrium of supply-side competition in recommender systems. In particular, the analysis examined when and how the recommender system influence producers' creation of online contents and will specialization occur under the effect of recommender systems.
Strengths: 1. Examining the supply-side competition under the influence of recommender system is an interesting topic to explore.
2. The paper presents theoretical analysis for the equilibrium of supply-side competition and specialization.
Weaknesses: 1. The theoretical analysis is based on a very simplified recommendation model, i.e, inner-product between the user and item vectors which is basically from the idea of matrix factorization. However, modern recommender systems have advanced much more beyond MF, and real-world systems are driven by deep neural networks and large language models. As a result, it's not clear if the theoretical analysis really sheds light on real-world recommender systems.
2. There is no experimental analysis; even a simulated analysis is missing.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. To what extent the theoretical analysis reflects real-world recommender systems that are driven by deep learning and LLMs.
2. Is there any experimental analysis to show the predicted effects, at least based on simulation?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper does not discuss limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback and respond to their questions below. We present new results that address some of their concerns: in particular, **a new empirical analysis on the MovieLens-100K dataset** that validates and goes beyond our theoretical findings (see the General Response).
**“Is there any experimental analysis to show the predicted effects”**
We provide a new empirical analysis of supply-side equilibria of nonnegative matrix factorization on the MovieLens-100K dataset. These experiments provide qualitative insights that validate our theoretical results, and they also provide intuition for the structure of supply-side equilibria that goes beyond our theoretical results.
- We compute the direction of the single-genre equilibrium embedding (using Corollary 5) for several different cost functions (see Figures N1-N2 in the General Response pdf). (Recall that this direction is equal to the genre chosen by producers when there is no specialization at equilibrium.) Figures N1-N2 confirm our theoretical findings that the location of the genre does not seem to permit a clean closed-form characterization (Corollary 5) and depends on subtleties of the cost function. Moreover, Figures N1-N2 go beyond our theoretical results to illustrate that the genre realized in a marketplace can be heavily influenced by some (but not other) aspects of producer costs.
- We compute the boundary where specialization starts to occur, using the upper bound from Corollary 4 (see Figure N3 in the General Response pdf). As $D$ increases, specialization is more likely to occur in a given marketplace. The intuition is that as $D$ increases, the user embeddings become more heterogeneous, which increases the likelihood of specialization, as suggested by our theoretical results in Section 3. Figure N3 further indicates that the platform may influence the level of specialization by setting the embedding dimension $D$ to be higher or lower. We also show that whether specialization occurs depends subtly on the cost function structure, which likewise aligns with our theoretical results from Section 3.
We refer the reviewer to the General Response for details of the empirical setup and results.
**“The theoretical analysis is based on a very simplified recommendation model, i.e, inner-product between the user and item vectors which is basically from the idea of matrix factorization…To what extent the theoretical analysis reflects real-world recommender systems that are driven by deep learning and LLMs?”**
Our formalism is not limited to simple matrix factorization algorithms. If we interpret user and movie vectors as “embeddings” *learned* by the algorithm, our formalism can also capture any deep learning-based recommendation systems that learn embeddings for the users and movies. In particular, the process by which these embeddings are learned can be arbitrarily complex, as long as the recommendation system ultimately evaluates user value based on the inner product of the learned embeddings.
As to language-model based recommendation systems, it is not clear if our model is a perfect fit for these newer systems. Understanding the supply-side equilibria of language model-based recommendation systems would be a very interesting direction for future work. We hope that our analysis of specialization inspires future work on this topic.
As a broader point about the stylized nature of our model, we refer the reviewer to “Nature of our contribution” in the General Response. We don’t think that incorporating all of the complexities of real-world recommender systems into our model would have improved our analysis of specialization. If a model has too many changing components and degrees of freedom, it is hard to pinpoint exactly which component is responsible for a given phenomenon. In general, complex models are often less robust, whereas results from simple models tend to at least qualitatively generalize beyond their assumptions. Our stylized model is one of the simplest that enables us to study specialization (see Appendix A.1).
---
Rebuttal Comment 1.1:
Comment: Reviewer rJ3F: thank you again for your review. We wanted to bring your attention to **our new empirical analysis on the MovieLens dataset in the rebuttal**, which we believe might address your concerns about the lack of empirical validation of our results. We also respond to your other questions in the rebuttal. If you have follow-up questions, we are happy to further clarify any aspect of the paper or the rebuttal in the remaining few days of the discussion period. | Summary: This paper aims to understand the equilibria of the digital content producer side competition in the recommender platforms. It specifically studies the potential of specialization, where different producers create different type of content. The paper proposed to model value of product as the inner product of user and item embeddings, where the personalization, product cost, as well as producer profit are derived from the embedding model. Based on this setup, the paper studies the equilibria of the producer side competition. It provides conditions under which specialization occurs and the form it takes. It claims that specialization can reduce the competitiveness of the marketplace by decreasing competition within each genre.
**Note** It should be noted that I am not deeply familiar with the field of algorithmic game theory. As a result, I may lack an in-depth understanding of the conceptual links between various aspects of the theory used in the paper and the broader field. Therefore, I may not be adequately equipped to fully assess the novelty and correctness of the technical content. While effort has been made to understand and summarize the paper, the detailed insights on the specific game-theoretic models and their implications could benefit from a review by a scholar who is more specialized in this field.
Strengths: * The study of the influence of recommender system on the digital content production and diversity of the content is an important topic.
* The insights drawn from the study of specialization could potentially provide new directions for recommender model design.
Weaknesses: * The current model presented in the paper seems to be specifically tailored towards digital goods such as music, movies, and news. It is uncertain how this model might translate to other types of recommender systems such as e-commerce, dating, or job recommendation platforms, which also hold significant relevance in today's digital marketplace. Although the authors have proposed directions for future work, further investigation is required to expand this model to other sectors and recommendation scenarios, enhancing the universal applicability of the research.
* While the model provides valuable theoretical insights, it could be strengthened by empirical validation. Observations or experiments on real-world recommender systems could further test and substantiate the authors' claims.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * Given the theoretical framework proposed in the paper, how can this study be practically utilized, particularly for practitioners who are actively involved in the development of new recommender system models? The practical implications and real-world applications of the theoretical findings would be of interest to the broader research community and industry practitioners. Clarification on this point would further enhance the utility and impact of the work.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations is discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback and respond to their questions below. We present new results that address some of their concerns: in particular, **a new empirical analysis on the MovieLens-100K dataset** that validates and goes beyond our theoretical findings (see the General Response).
**“While the model provides valuable theoretical insights, it could be strengthened by empirical validation.”**
We provide a new empirical analysis of supply-side equilibria under nonnegative matrix factorization on the MovieLens-100K dataset. These experiments provide qualitative insights that validate our theoretical results and also offer intuition for the structure of supply-side equilibria that goes beyond our theoretical results.
- We compute the direction of the single-genre equilibrium embedding (using Corollary 5) for several different cost functions (see Figures N1-N2 in the General Response pdf). (Recall that this direction is equal to the genre chosen by producers when there is no specialization at equilibrium.) Figures N1-N2 confirm our theoretical findings that the location of the genre does not seem to permit a clean closed-form characterization (Corollary 5) and depends on subtleties of the cost function. Moreover, Figures N1-N2 go beyond our theoretical results to illustrate that the genre realized in a marketplace can be heavily influenced by some (but not other) aspects of producer costs.
- We compute the boundary where specialization starts to occur, using the upper bound from Corollary 4 (see Figure N3 in the General Response pdf). As $D$ increases, specialization is more likely to occur in a given marketplace. The intuition is that as $D$ increases, the user embeddings become more heterogeneous, which increases the likelihood of specialization, as suggested by our theoretical results in Section 3. Figure N3 further indicates that the platform may influence the level of specialization by setting the embedding dimension $D$ to be higher or lower. We also show that whether specialization occurs depends subtly on the cost function structure, which likewise aligns with our theoretical results from Section 3.
We refer the reviewer to the General Response for details of the empirical setup and results.
**“The current model presented in the paper seems to be specifically tailored towards digital goods such as music, movies, and news. It is uncertain how this model might translate to other types of recommender systems such as e-commerce, dating, or job recommendation platforms, which also hold significant relevance in today's digital marketplace.”**
As the reviewer notes, the current model presented in the paper is tailored towards digital content. This is intentional: as discussed in the introduction, the supply-side effects of digital content recommendation platforms are poorly understood, and understanding these particular supply-side effects is the focus of our paper. Other types of recommender systems, such as for dating or for jobs, have a different structure in terms of interactions between participants, incentives and strategic behavior, and the platform’s objective. We believe that grouping these different types of marketplaces into a single model would hinder the derivation of conclusive economic insights.
**“Given the theoretical framework proposed in the paper, how can this study be practically utilized, particularly for practitioners who are actively involved in the development of new recommender system models?”**
One insight from our results is that the practitioners can influence the long-run equilibrium content landscape through their choice of embedding dimension. As shown in our empirical analysis (Figure N3), the embedding dimension impacts the cost function exponent at which specialization starts to occur in a marketplace. What this means is that a practitioner can increase the embedding dimension to increase the level of specialization in the marketplace (whether every producer produces similar genres or very different ones). More generally, our work shows that platforms can influence the amount of specialization through their recommender systems.
As to whether the practitioners should opt to induce specialization or not, this depends on several factors. Our results highlight two consequences of specialization—(1) content diversity, and (2) positive producer profit—which should both impact the platform’s decision.
- Specialization leads to content diversity, which impacts the long-term satisfaction of users and thus the long-run revenue of the platform. On the positive side, content diversity can provide users with content tailored to their interests, which may help attract and retain a wider user base. On the other hand, content diversity and the consumption of niche content may inadvertently drive filter bubbles, polarization, or other negative user experiences.
- Specialization can also lead producers to earn a positive profit at equilibrium (Section 4). This has negative implications for content quality which might reduce the long-term satisfaction of users (and thus lower user retention). On the other hand, this has positive implications for producers, which might improve producer retention.
The platform can use knowledge of its specific marketplace to balance the positive and negative effects, and determine whether specialization improves revenue in a given marketplace.
---
Rebuttal Comment 1.1:
Comment: Reviewer sbgm: thank you again for your review. We wanted to bring your attention to **our new empirical analysis on the MovieLens dataset in the rebuttal**, which we believe might address your concerns about the lack of empirical validation of our results. We also respond to your other questions in the rebuttal. If you have follow-up questions, we are happy to further clarify any aspect of the paper or the rebuttal in the remaining few days of the discussion period. | Summary: This work studies the supply-side equilibria in content recommender platforms. The authors proposed a game-theoretic model to describe content creators' competition and derive necessary and sufficient conditions under which the specialization over genres occurs or does not occur at the equilibrium.
Strengths: The motivation is well justified, and the proposed problem setting is interesting in general.
The technique used for the main result is novel and solid, and the insight behind the theoretical results is well-explained.
Weaknesses: The proposed problem itself and the solution are intriguing, but the specific problem setting considered seems to be oversimplified. In particular, the utility function defined in Eq.(1) is symmetric for different creators, and only the symmetric mixed NE is considered. I understand this is the only feasible solution concept with an existence guarantee in this situation; still, it is not a realistic model to characterize the outcome of creators' competition in practice. In the real world, creators should have heterogeneous preferences and costs, which should be reflected in their distinct strategy sets or cost functions. It is also hard to believe that, in any real market, all creators will eventually form a homogeneous belief about the production strategy (i.e., a symmetric mixed NE).
In terms of the theoretical results, my understanding is that the specialization phenomenon hinges on the joint property of the user population structure and parameter $\beta$. While I appreciate the insight about how user distribution might affect the emergence of specialization, the discussion of $\beta$ seems too restrictive to the specific form of the cost term. The current term $\|p\|^{\beta}$ is too simple to capture the nature of the cost, for example, 1. why the marginal cost has to depend on an exponential factor rather than a multiplicative factor? 2. why the cost only depends on the norm of $p$? What if we generalize the cost term in the following ways:
1. $c_j(p) = a \|p\|^{\beta}$
2. $c_j(p) = a_j \|p\|^{\beta}$
3. $c_j(p) = a \|p-p_j\|^{\beta}$
4. $c_j(p) = a_j \|p-p_j\|^{\beta}$
Can we still derive results that share similar insights using the current technique? If such an extension is promising, I would highly recommend this paper.
Minor:
L.254: should be "linearly independent vectors"
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the questions raised in weaknesses.
1. in corollary 3, what does it mean by "N users split equally between two linearly independent vectors?"
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback and respond to their questions below. In response to the reviewers, we also present **a new empirical analysis on the MovieLens-100K dataset** that both validates and goes beyond our theoretical findings (see the General Response).
**“What if we generalize the cost term in the following ways?”**
We describe the applicability of our cost functions to the functional forms proposed by the reviewers.
- First, our model and results directly accommodate the scaled cost function $c_S(p) = a ||p||^{\beta}$. We can define a new norm $||x||_{S} = ||x \cdot a^{1/\beta}||$, so that $c_S(p) =a ||p||^{\beta} = ||p||_S^{\beta}$. Our characterization from Theorem 1 specifies when specialization occurs for $c_S(p)$: specialization occurs for $c_S(p)$ if and only if specialization occurs for the non-scaled cost function $c(p) = ||p||^{\beta}$.
- Our model and results can also accommodate the translated cost function $c_T(p) = a ||p - q||^{\beta}$. We can handle the scalar factor of $a$ by a similar argument to the above, so WLOG let’s assume that $a = 1$. We claim that $X \sim \mu$ is a symmetric mixed equilibrium for $c_T(p) = ||p-q||^{\beta}$ if and only if $(X - q)$ where $X \sim \mu$ is a symmetric mixed equilibrium for $c(p) = ||p||^{\beta}$. The intuition is that we can write $\langle u, p - q \rangle = \langle u, p \rangle - \langle u, q \rangle$, which just shifts a user’s utility by the same constant factor for all producers, so it can be disregarded. This gives us an equivalence between the equilibria in these two setups; however, to apply Theorem 1, we’d need to slightly change the definition of genre. We instead define $Genre(\mu)$ to be the set of directions $\frac{p-q}{||p-q||}$ for $p \in \text{supp}(\mu)$. That is, the genres object would instead capture the directions along which the producers *change* from the starting point of $q$, rather than the final direction of the producers. (Note: we don’t run into any technical difficulties with the nonnegative orthant constraints, because those turn out to only be required for user vectors and not for producer vectors in our results.)
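The scaled-cost reduction in the first bullet can be sanity-checked numerically; below is a minimal NumPy sketch (the function names are ours, not from the paper):

```python
import numpy as np

def c_scaled(p, a, beta):
    # scaled cost c_S(p) = a * ||p||^beta (Euclidean norm)
    return a * np.linalg.norm(p) ** beta

def c_new_norm(p, a, beta):
    # equivalent form ||p||_S^beta, where ||x||_S = ||x * a^(1/beta)||
    return np.linalg.norm(p * a ** (1.0 / beta)) ** beta

rng = np.random.default_rng(0)
p = rng.random(5)
# the two forms agree for any positive scale a and exponent beta
for a, beta in [(0.5, 1.5), (2.0, 3.0), (7.0, 2.0)]:
    assert np.isclose(c_scaled(p, a, beta), c_new_norm(p, a, beta))
```

This is just the algebraic identity $||p \cdot a^{1/\beta}||^{\beta} = a\,||p||^{\beta}$, checked at a few parameter values.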
We cannot directly capture heterogeneous costs in our model, which precludes some of the functional forms proposed by the reviewer (see additional discussion below).
**“The current term $||p||^{\beta}$ is too simple to capture the nature of the cost. For example, why the cost only depends on the norm of p?”**
We allow for any norm (not just the $\ell_2$-norm) within our cost function, which captures a very broad family of functions. For example, we can take the norm $||\cdot||$ to be the $\ell_q$ norm for any $q \ge 1$ as well as any weighted cost norm $||x||$ defined to be the $\ell_2$ norm of $[x_1 \cdot \alpha_1, \ldots, x_D \cdot \alpha_D]$ (we study both in our new experiments on the MovieLens-100K dataset, as described in the General Response). In more detail, weighted costs are parameterized by a $D$-dimensional vector $\alpha \in R_{\ge 0}^D$ of weights such that $\sum_{i=1}^D \alpha_i = 1$. The cost function $c_{\alpha}$ is defined to be $c_{\alpha}(p) := ||[p_1 \cdot \alpha_1, \ldots, p_D \cdot \alpha_D]||^{\beta}$ where the norm is the $\ell_2$ norm. This cost function captures that certain dimensions might be cheaper vs. more expensive for the producer to improve. We could also accommodate Mahalanobis distance, matrix norms, etc. within our model.
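As an illustration, both cost families described above are straightforward to write down (a minimal sketch; the helper names are ours):

```python
import numpy as np

def c_weighted(p, alpha, beta):
    # weighted cost c_alpha(p) = ||[p_1*alpha_1, ..., p_D*alpha_D]||^beta
    # (l2 norm); alpha encodes which dimensions are cheap vs. expensive
    return np.linalg.norm(p * alpha) ** beta

def c_lq(p, q, beta):
    # l_q-norm cost c_q(p) = ||p||_q^beta, for any q >= 1
    return np.linalg.norm(p, ord=q) ** beta

p = np.array([1.0, 2.0, 0.5])
alpha = np.array([0.6, 0.3, 0.1])   # weights summing to 1
assert np.isclose(alpha.sum(), 1.0)
# moving p along the low-weight third dimension adds little weighted cost
print(c_weighted(p, alpha, beta=2.0), c_lq(p, q=3, beta=2.0))
```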
**“The utility function defined in Eq.(1) is symmetric for different creators, and only the symmetric mixed NE is considered.”**
Our model and results do require that all producers share the same cost function. One could naturally extend our model to allow producers to have heterogeneous cost functions, but we focused on a single cost function to simplify the technical analysis. In fact, the technical analysis for identical cost functions already required several novel technical innovations (see Appendix B.1). We would qualitatively expect that the tendency towards specialization would only be amplified if producers could have heterogeneous cost functions.
That being said, our model does implicitly capture heterogeneity in *producer behaviors* via the randomness in the symmetric mixed equilibrium. Under this randomness, each producer independently samples a content vector from the equilibrium distribution, so different producers create different content in any given realization. In fact, this heterogeneity in producer behavior motivated us to formalize specialization in terms of the support of the symmetric mixed equilibrium distribution (lines 179-188).
**“In corollary 3, what does it mean by "N users split equally between two linearly independent vectors?":**
We assume that $u_1 = \ldots = u_{N/2} = x_1$ and $u_{N/2+1} = \ldots = u_N = x_2$ for some linearly independent vectors $x_1$ and $x_2$.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thanks for the response. After carefully digesting the response, my biggest concern stands: the framework is not able to capture the heterogeneous situation, which I believe is a very important aspect to consider in practice. Although I understand that mixed NE does allow players to take different actions, the insight that everybody eventually converges to the same mixed strategy seems unrealistic and overly simplified to me.
However, despite the concern I have, I do appreciate the novelty of the model and the acute analysis. That said, I believe this work does have the potential to serve as a nice starting point to study the specialization effect of producer competition. Therefore, I decide to raise my score from 5 to 6. | Summary: In this paper, the authors investigate the supply-side equilibria in personalized content recommender systems. They propose a game-theoretic model that captures the multi-dimensional decisions of producers and the heterogeneous preferences of users. They analyze the conditions for specialization to occur and the impact of specialization on market competitiveness. The paper provides insights into how recommender systems shape the diversity and quality of content created by producers.
Strengths: (1) The paper addresses an important and timely topic – the impact of recommender systems on the supply side of the digital goods market. It sheds light on how producers make decisions to maximize their appearance in recommendations, and how this affects the diversity and competitiveness of the marketplace.
(2) The proposed game-theoretic model is well-designed and captures the multi-dimensional decision space of producers and the heterogeneity of user preferences. This model allows for a nuanced analysis of specialization and its consequences in recommender systems.
(3) The paper provides rigorous theoretical analysis, deriving necessary and sufficient conditions for specialization to occur. It also presents concrete settings with two populations of users to characterize the distribution of content at equilibrium.
Weaknesses: (1) The implementation details of the proposed model are not provided. It would be helpful for readers to have a clear understanding of how the model can be replicated and reproduced.
(2) The evaluation of the proposed model is limited. The paper does not compare the results with any existing baselines or alternative approaches. It would be valuable to see how the proposed model performs compared to other methods in the field.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: My questions are mentioned above. You can provide the feedback in the rebuttal phase.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback and respond to their questions below. We present new results that address some of their concerns: in particular, a **new empirical analysis on the MovieLens-100K dataset** that validates and goes beyond our theoretical findings (see the General Response).
**“The evaluation of the proposed model is limited.”**
We provide a new empirical analysis of supply-side equilibria under nonnegative matrix factorization on the MovieLens-100K dataset. These experiments provide qualitative insights that validate our theoretical results and also offer intuition for the structure of supply-side equilibria that goes beyond our theoretical results.
- We compute the direction of the single-genre equilibrium embedding (using Corollary 5) for several different cost functions (see Figures N1-N2 in the General Response pdf). (Recall that this direction is equal to the genre chosen by producers when there is no specialization at equilibrium.) Figures N1-N2 confirm our theoretical findings that the location of the genre does not seem to permit a clean closed-form characterization (Corollary 5) and depends on subtleties of the cost function. Moreover, Figures N1-N2 go beyond our theoretical results to illustrate that the genre realized in a marketplace can be heavily influenced by some (but not other) aspects of producer costs.
- We compute the boundary where specialization starts to occur, using the upper bound from Corollary 4 (see Figure N3 in the General Response pdf). As $D$ increases, specialization is more likely to occur in a given marketplace. The intuition is that as $D$ increases, the user embeddings become more heterogeneous, which increases the likelihood of specialization, as suggested by our theoretical results in Section 3. Figure N3 further indicates that the platform may influence the level of specialization by setting the embedding dimension $D$ to be higher or lower. We also show that whether specialization occurs depends subtly on the cost function structure, which likewise aligns with our theoretical results from Section 3.
We refer the reviewer to the General Response for details of the empirical setup and results.
**“The implementation details of the proposed model are not provided. It would be helpful for readers to have a clear understanding of how the model can be replicated and reproduced.”**
As a primarily theoretical contribution (except for the new experiments on the MovieLens dataset), we believe that we have fully specified our model in Section 2 and included all the theoretical proofs in the attached supplement. As such, we believe that our results are fully replicable and verifiable by readers. If there is a particular detail that is unclear, we would appreciate it if the reviewer could point it out so we can clarify it.
**“The paper does not compare the results with any existing baselines or alternative approaches. It would be valuable to see how the proposed model performs compared to other methods in the field.”**
It is not clear what “baselines” would mean in our setting. Our goal in this paper is not to propose an algorithm or a method. Rather, as is typical of many papers in the economics and machine learning literature, we develop a mathematical model and analyze it to provide economic insights about real-world marketplaces. Our specific goal in this paper is to study content creator incentives in recommender systems and, in particular, the economic phenomenon of specialization. For more context about the style of our contribution, we refer the reviewer to the “Nature of contribution” section of the General Response.
---
Rebuttal Comment 1.1:
Comment: Reviewer ham5: thank you again for your review. We wanted to bring your attention to **our new empirical analysis on the MovieLens dataset in the rebuttal**, which we believe might address your concerns about the lack of empirical validation of our results. We also respond to your other questions in the rebuttal. If you have follow-up questions, we are happy to further clarify any aspect of the paper or the rebuttal in the remaining few days of the discussion period. | Rebuttal 1:
Rebuttal: Thanks to the reviewers for their feedback. We provide a **new empirical analysis of our theoretical findings on the MovieLens-100K dataset**. We then clarify the nature of our contribution of proposing and analyzing a mathematical model to study an economic phenomenon. (We respond individually to the reviewers below.)
## Empirical analysis on MovieLens dataset
Several reviewers asked about an empirical analysis of our theoretical findings on real-world datasets. We provide an empirical analysis of supply-side equilibria using the MovieLens-100K dataset and recommendations based on nonnegative matrix factorization (NMF). These experiments provide qualitative insights that validate our theoretical results and provide intuition going beyond our theoretical results.
We construct user embeddings of dimension $D = 2, 3, 5, 10, 50$ by running NMF with $D$ factors. We consider two families of producer cost functions:
- (C1) Let $\alpha \in R_{\ge 0}^D$ be a weight vector such that $\sum_{i=1}^D \alpha_i = 1$. The cost function $c_{\alpha}(p) := ||[p_1 \cdot \alpha_1, \ldots, p_D \cdot \alpha_D]||^{\beta}$ captures that certain dimensions might be cheaper vs. more expensive to improve along.
- (C2) Let $q \ge 1$ and let $c_q(p) := ||p||_q^{\beta}$.
**Single-genre equilibrium direction**: We compute the direction of the single-genre equilibrium embedding (using Corollary 5) for cost functions in (C1) (see Figure N1 in pdf) and in (C2) (Figure N2 in pdf). (Recall that this direction is equal to the genre chosen by producers when there is no specialization at equilibrium.) We observe the following:
- For (C1), the genre varies significantly with the weights $\alpha$ (see Figure N1). The magnitude of the genre coordinate is higher along the cheaper dimension.
- For (C2), the genre does not change significantly with the norm parameter $q$ (see Figure N2).
- In both cases, the genre typically does not coincide with the arithmetic mean of the users.
Figures N1-N2 align with our theoretical findings that the genre location doesn’t permit a clean closed-form characterization (Corollary 5) and depends on subtleties of the cost function. Figures N1-N2 also go beyond our theoretical results to show that the genre can be heavily influenced by some (but not other) aspects of producer costs.
**Boundary where specialization starts to occur**: We investigate the cost function exponent $\beta^*$ where specialization starts to occur (line 224, Theorem 1). We compute an upper bound $\beta^u$ on $\beta^*$ (using Corollary 4) for cost functions in (C1) and different embedding dimensions $D$ (Figure N3). We observe the following:
- As $D$ increases, the value of $\beta^u$ *decreases*, and specialization is more likely to occur (see Figure N3). The intuition is that increasing $D$ increases the heterogeneity of user embeddings, which increases the likelihood of specialization as suggested by our theoretical results in Section 3. Figure N3 suggests that the platform may influence the level of specialization by tuning $D$.
- As the cost function parameter $q$ increases, the value of $\beta^u$ *increases* and specialization is less likely to occur (see Figure N3). This confirms our theoretical insights about the subtle role of the cost function in whether specialization occurs.
**Details of empirical setup**: We use the MovieLens 100K dataset, which consists of 943 users, 1682 movies, and 100,000 ratings. To obtain $D$-dimensional user embeddings, we run NMF (with $D$ factors) using the scikit-surprise library on the full dataset. For Figure N3, we directly calculate $\beta^u$ using Corollary 4. For Figures N1-N2, we numerically solve the optimization program in Corollary 5: we solve it directly with CVXPY for (C1), and for (C2) we use projected gradient descent with step size 1.0 over 100 iterations, with the projection step solved in CVXPY.
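As a self-contained illustration of the NMF step (a stand-in for the scikit-surprise pipeline described above, not the exact code), one can run Lee-Seung multiplicative updates on a synthetic nonnegative ratings matrix in place of MovieLens; all names here are ours:

```python
import numpy as np

def nmf(V, D, iters=300, eps=1e-9, seed=0):
    # Lee-Seung multiplicative updates: V ~= W @ H with W, H >= 0.
    # Rows of W play the role of the D-dimensional user embeddings.
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, D)) + eps
    H = rng.random((D, m)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update item factors
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update user factors
    return W, H

rng = np.random.default_rng(1)
V = rng.random((30, 40))          # synthetic users x movies ratings matrix
W, H = nmf(V, D=5)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
assert W.shape == (30, 5) and (W >= 0).all() and err < 0.6
```

The nonnegativity of the factors is what makes the user embeddings lie in the nonnegative orthant, as the model assumes.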
We will use the extra page in the final version to include the details of this empirical analysis.
## Nature of contribution: Proposing and analyzing a stylized model
Some of the reviewers asked for comparison against baselines or asked questions about the stylized nature of our model. We would like to clarify the nature and style of our contribution.
**Proposing and analyzing a mathematical model:** Our contribution is to develop a mathematical model and analyze the model to provide economic insights about real-world marketplaces. As a result, we do not design an algorithm or a method, but rather analyze *behavior* within the mathematical model that we propose (i.e., *specialization* within a model for content creator competition).
Proposing and analyzing behavior within a mathematical model is standard in economics and in machine learning, both in machine learning theory and in the societal aspects of machine learning more broadly (e.g., [A], [B], [C]). These subfields are listed in the NeurIPS call for papers: Theory (algorithmic game theory) and Social and economic aspects of machine learning (strategic behavior).
[A] Lydia T. Liu, Sarah Dean, Esther Rolf, Max Simchowitz, and Moritz Hardt. “Delayed Impact of Fair Machine Learning”. ICML 2018 Best Paper.
[B] Simon Zhuang and Dylan Hadfield-Menell. “Consequences of Misaligned AI.” NeurIPS 2020.
[C] Kate Donahue and Jon Kleinberg. “Optimality and Stability in Federated Learning: A Game-Theoretic Approach.” NeurIPS 2021.
**Stylized nature of our model:** We don’t think that incorporating all of the complexities of real-world recommender systems into our model would have improved our analysis of specialization. If a model has too many moving components and degrees of freedom, it is hard to pinpoint exactly which component is responsible for a given phenomenon. In general, complex models are often less robust, whereas results from simple models tend to at least qualitatively generalize beyond their assumptions. Our stylized model is one of the simplest that enables us to study specialization (see Appendix A.1).
Pdf: /pdf/fc1f7b33f3a6f22095e2e772255b41d5ece002c5.pdf | NeurIPS_2023_submissions_huggingface | 2023 | null | null | null | null | null | null | null | null |
Understanding and Improving Feature Learning for Out-of-Distribution Generalization | Accept (poster) | Summary: The paper studies OOD generalization in the presence of spurious correlation. First, it provides a theoretical analysis showing that during ERM training, both spurious and invariant features are learned but at different rates, which, in turn, influences the performance of the following optimization with OOD objectives, such as IRM. Then, it leverages the theoretical analysis to propose a new feature pre-training method to facilitate learning a more diverse set of features. The empirical evaluation shows that the proposed method improves standard ERM pre-training and other feature pre-training baselines in various settings.
Strengths: The paper provides a good combination of theoretical and empirical studies. Both parts are well-motivated, sound and offer a valuable contribution to the field:
- The theoretical results deepen our understanding of how ERM learns different features and how this learning schedule interacts with further OOD training and generalization.
- The proposed method, motivated by the theoretical analysis, is an efficient and effective way of improving OOD performance, as shown on several datasets.
Weaknesses: - The discussions provided after each formal statement help the reader understand their implications. However, the overall clarity and presentation of the paper could be improved to be friendlier to a general audience. For example:
- Fig. 2 – more details can be presented in the caption, e.g., it is unclear what the “Feature Learning” axis means. Are those coefficients introduced in Lemma 3.2?
- It is not entirely clear what “OOD objectives” mean before L122.
- The description of the FAT method could be more elaborate. For example, the description of the retention mechanism via saving previous linear layers could be explained more explicitly.
- The theoretical setting and proofs use the linear activation function, effectively rendering the model linear. While the authors mention in the main text that this framework can be extended to ReLU activation functions, it would be good to provide more details in the proof of where and how the authors think the referred works (L119) could be applied to extend their results to non-linear networks.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - According to the provided theory, pre-training features with ERM for longer seems always beneficial. However, as shown in Fig.1 left and mentioned in experiments (e.g., L306-307), overfitting can occur, and OOD performance decreases with further pre-training. Does your theory also explain this phenomenon?
- It was unclear why the memory cost was too significant to switch to iFAT. Couldn’t the subsets be stored as masks, which would not increase the memory costs considerably, given that K equals 2 or 3 in practice? The additional parameters are the linear layers w_i, which also should not contribute much to the memory cost.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: - As mentioned in the weaknesses, the presented proofs are limited to linear models. Additional work is needed to extend these results to the case of non-linear CNN.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time in reviewing our paper and your positive feedback! We hope our response below would make you more confident in supporting our work.
> W1.1 Axis in Fig.2.
We have revised the caption of Fig.2 to include the details. As mentioned in Appendix C.1, the invariant and spurious feature learning terms plotted in Fig. 2 are the means of $\langle \mathbf{w}\_{j,r}, j\mathbf{v}\_1 \rangle$ and $\langle \mathbf{w}\_{j,r}, j\mathbf{v}\_2 \rangle$ for $j\in \lbrace\pm 1\rbrace, r\in[m]$, respectively.
> W1.2 OOD objective.
We have revised our draft to make it clearer: we use “OOD objective” to refer to the additional penalty terms developed to regularize ERM to capture the invariant features, as introduced in line 29.
> W1.3 Retention mechanism.
We have revised our work to explicitly describe the retention mechanism: FAT needs to retain the already learned features by minimizing the empirical risk at $G^r$, for which we store and use the historical classifiers $w_i$ with the current featurizer to evaluate the feature retention degree.
> W2 Extending results to non-linear.
Here we showcase how our two key theorems can be extended to non-linear settings:
- For the ERM feature learning (Thm 4.1), we have updated our draft to include the analysis for non-linear activation functions. The result is that for smooth or piecewise linear activation functions (ReLU, Leaky ReLU, Softplus, etc.), suppose we run $T = O(d\log{d})$ iterations of GD for the ERM objective (in the early stage of training), the feature learning terms (e.g., $\sigma(\langle \mathbf{w}\_{j,r}, j\mathbf{v}\_1 \rangle)$) will be proportional to the empirical distributions of the signal strengths (e.g., $\textup{Rad}(\alpha)$) up to an error of $O(d^{-\Omega(\zeta)})$ with $\zeta\in (0, \frac{1}{4})$ being a fixed constant. The proof ideas are summarized as follows: We adapt the results in [A] (Thm 3.2), which shows that in the early stage of GD training, for a network that is sufficiently wide (NTK region), its output can be well approximated by a linear model, and the approximation error scales as $O(d^{-\Omega(\zeta)})$. Based on this approximation, we extended our analysis in the linear case to allow non-linear activation functions. We observe a weaker dominance of the spurious feature in the non-linear case both theoretically and empirically.
- For IRM feature learning (Thm 4.2), we have updated our draft to include the analysis for non-linear activation functions. To account for the non-linear activation, we introduce an additional assumption requiring the activation $\psi$ to be smooth and Lipschitz: $\psi'(0) \le \beta$ and $|\psi'(x)-\psi'(x')|<\beta|x - x'|$ (smoothness), and $|\psi(0)| < L$ and $|\psi(x)-\psi(x')|< L |x - x'|$ (Lipschitzness). Based on this additional assumption, we can extend our analysis in the linear case to allow non-linear activation functions.
> Q1 ERM pre-training for longer.
We need to clarify that **our theory does not imply that longer ERM pre-training benefits feature learning**; rather, it suggests that ERM learns both invariant and spurious features until convergence. In other words, ERM feature learning will **saturate** after a certain number of steps, which aligns with the empirical evidence in Fig. 10 of [25].
The performance decrease of ERM feature learning in Fig. 1b implies one of the ERM feature learning drawbacks, which may be because of the “simplicity bias” that ERM tends to learn simple functions [50]. Our additional analysis in the rebuttal pdf also suggests that ERM can **forget** certain useful features.
There are other factors that could influence feature learning, such as network architectures and optimization algorithms. As the first work to theoretically characterize the feature learning of ERM and OOD objectives, incorporating all of those factors would unnecessarily increase the difficulty of the analysis. Nevertheless, as discussed in Appendix A, we believe it is a promising future extension of our theory.
> Q2 memory cost
We need to clarify that FAT needs to store all $D^a_i$, $D^r_i$ and $w_i$ yielded in previous rounds. The storage of these items does not cause much of a memory issue, while **training using the FAT objective (Eq. 7) with all previous subsets can lead to OOM for a large network**, as Eq. 7 increases the batch size by a factor of $k$ at the $k$-th round. Note that the batches are typically also sampled from each environment, as shown in Table 9. Let the number of sampled domains be $d$ and the batch size from each domain be $b$; then each round additionally introduces $2d\times b$ samples into each minibatch, which is typically more than 200 and leads to an OOM issue on a V100 GPU with around 32G of GPU memory.
We also show the running time and memory cost of different methods in the "global" response.
**References**
[25] On feature learning in the presence of spurious correlations, NeurIPS'22.
[50] The pitfalls of simplicity bias in neural networks, NeurIPS'20.
[A] The surprising simplicity of the early-time learning dynamics of neural networks, NeurIPS’20.
We are happy to answer any outstanding questions, and we’d appreciate it if you could jointly consider our responses when making the final evaluation of our work.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response clarifying my comments and questions. I increase my confidence and the presentation score assuming the mentioned clarifications will make it to the camera-ready version.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you for checking our rebuttal. We're glad that our responses are able to address your comments and questions. If you have any other comments, feel free to let us know. Thank you again for your valuable comments and suggestions. | Summary: The paper consists of two parts. The first part provides a theoretical analysis of the training dynamics for a simple model and data distribution under ERM and IRM. Specifically, the authors explore questions of feature learning when one of the features changes its correlation to the target between environments (spurious feature) and the other does not (core feature). The second part proposes a training method for extracting rich features, and shows promising results across an array of benchmarks.
Strengths: **S1.** The paper provides an extensive theoretical analysis of gradient descent dynamics under ERM and IRMv1 for a specific model and data distribution.
**S2.** The paper proposes a novel training method (FAT) which shows strong performance in terms of feature learning
Weaknesses: ### W1: Theory
The paper places a lot of emphasis on the theoretical analysis, which occupies pages 3-6. The authors use the theoretical results to provide intuition about feature learning in neural networks, and training dynamics of ERM and IRM.
Unfortunately, the model used is extremely simple. While the authors argue that it is a convolutional neural network, it is in fact **a linear model**. Indeed, the authors define the model as
$f(W, x) = F_{+1}(W_{+1}, x) - F_{-1}(W_{-1}, x)$, where
$F_j(W_j, x) = \frac 1 m \sum_{r=1}^m [w_{j, r}^T x_1 + w_{j, r}^T x_2]$, where I omitted the activation $\sigma$, which the authors set to be the identity mapping.
We can then rewrite the model as a linear model
$f(W, x) = \frac 1 m \sum_{r=1}^m [w_{+1, r} - w_{-1, r}]^T (x_1 + x_2) = \tilde w^T \tilde x$, where $\tilde w = \frac 1 m \sum_{r=1}^m [w_{+1, r} - w_{-1, r}] $ is the effective weight vector and $\tilde x = x_1 + x_2$ is the effective feature vector.
In other words, the "CNN" model used by the authors is simply a reparameterized linear model.
Moreover, the reparameterization does not really change the gradient dynamics in a non-trivial way, as
$\frac {\partial L}{\partial w_{j, r}} = \frac {\partial L}{\partial \tilde w} \frac {\partial \tilde w} {\partial w_{j, r}} = \frac {j}{m} \nabla_{\tilde w} L$.
In other words, all of the weights $w_{j, r}$ are updated with $\pm$ the gradient of the linear model divided by $m$.
So **understanding the training dynamics in the "CNN" model is equivalent to understanding the training dynamics of a linear model**.
To summarize, the authors study the training dynamics of linear model with a logistic loss.
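This collapse is easy to verify numerically; a small numpy sketch (the dimensions, weights, and inputs are made up) checking that the two-branch model with identity activation agrees with the collapsed linear model:

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 5, 8                                          # filters per branch, input dimension
W = {j: rng.normal(size=(m, d)) for j in (+1, -1)}   # rows are the weights w_{j,r}
x1, x2 = rng.normal(size=d), rng.normal(size=d)      # the two input patches

# Two-branch model with identity activation:
# F_j = (1/m) * sum_r [w_{j,r}^T x1 + w_{j,r}^T x2],  f = F_{+1} - F_{-1}
F = {j: (W[j] @ x1 + W[j] @ x2).mean() for j in (+1, -1)}
f_cnn = F[+1] - F[-1]

# Collapsed linear model: w_tilde^T (x1 + x2)
w_tilde = (W[+1] - W[-1]).mean(axis=0)
f_lin = w_tilde @ (x1 + x2)
```

The two outputs agree up to floating-point error, as the algebra above predicts.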
The authors do not clarify this connection in the paper, which in my opinion is misleading. This omission also unnecessarily complicates the presentation.
I also believe there are several important implications of the fact that the authors analyze the training dynamics of a linear model:
- It is not clear what is even meant by feature learning. Specifically, the authors refer to how much weight the model assigns to each of the input features. However, this is more similar to last layer / classifier training in the context of [1] that the authors reference, rather than feature learning.
- Consequently, it's unclear what conclusions can even be made from the experiments about _feature learning_ in _neural networks_. The connection here seems far-fetched.
- The setup for the IRM is quite strange. Specifically, the feature that the model outputs in this context is the logit (a number) predicted by the model, and the classifier is just fixed to be $1$. As a result, the authors get a logistic regression model with some additional gradient penalty. I am not an expert on IRM, but it is not clear how relevant this model is to _feature learning_ in neural networks with IRM.
Please correct me if I am wrong in the reasoning above!
### W2: Theory $\leftrightarrow$ Methodology
The connection between the theory and the proposed method (FAT) is not clear to me.
It seems like the main conclusion from the theory comes down to the idea that we need to learn diverse features, which ERM by itself might not do.
However, as I mentioned above, the relevance of the theory to feature learning in neural networks is questionable.
In particular, the theory does not connect as far as I can tell to any of the details of the method.
The method could very easily be presented without the theory.
### W3: Presentation
Because so much emphasis is placed on the theoretical results (which are in my opinion of limited relevance), the authors have to describe the method and the experiments in a limited space.
The presentation of the method is not very clear.
In particular, many of the design decisions are not explained.
For example, why do we need to reinitialize the weights of the classifier in each "round"?
The datasets $\mathcal{D_i^a}$ are never defined in the text, but used in line 262.
Overall, it is quite hard to follow the description of the algorithm, and the intuition behind it.
Moreover, the iFAT method which is actually used in all of the experiments is not even described in the paper.
I would recommend deemphasizing the theory and using most of the space in the paper to clearly describe the method, present detailed experimental results, and include ablations on the various design choices.
### W4: Performance
Overall, FAT seems to consistently provide good results in Table 2. However, it's worth noting that the improvements over ERM appear to be fairly small (<1%) except for Camelyon17.
**References**
[1] [_Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations_](https://openreview.net/forum?id=Zb6c8A-Fghk);
P. Kirichenko, P. Izmailov, A. G. Wilson;
ICLR 2023
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: **Q1.** In Eq. 5, why is the second term divided by $n_e^2$ and not $n_e$?
**Q2.** You mention in line 279 that FAT comes with additional memory requirements, especially if the feature extractor has many parameters. Why is that? What exactly do you need to store?
**Q3.** How exactly do you run DFR on CivilComments and Camelyon17? The results, even for ERM+DFR, appear to be surprisingly good, and in particular better than what's currently reported in the [WILDS leaderboard](https://wilds.stanford.edu/leaderboard/)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 1 poor
Contribution: 2 fair
Limitations: The limitations are adequately addressed in my opinion, except for the issues raised in the weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time in reviewing our work. Please see our detailed responses to your comments and suggestions below where we use references in our draft due to the token limit.
> W1.1 The linear activation in the CNN model
We respectfully disagree with the point:
- First, we need to clarify that, as the first work analyzing ERM and IRMv1 feature learning under distribution shifts, the results we obtained with the IRM regularization (which is **non-linear**) are not obvious under **non-convexity** [a], despite the linear activation. In fact, the assumption of linear activation is widely used and standard in the theoretical literature on OOD generalization [4,32,40,61,64]. Linear CNNs are also widely studied by the community [b,c,d].
- Besides, **we also show that our key theoretical results could be generalized to the non-linear setting**. Please find the details in [our response to Reviewer qcQH](https://openreview.net/forum?id=eozEoAtjG8&noteId=KmRvdQIZ2C) due to the character limits.
- We adopt the linear activation function primarily for the sake of simplicity and clarity when studying IRMv1. It's worth noting that IRMv1 involves high-order derivatives, which pose a significant challenge for learning-theoretic analysis.
- Going beyond the linearity of activations, the experiments in Fig. 1b show that our theoretical results align with the empirical discoveries in more complex settings.
> W1.2 The meaning of feature learning
We need to clarify that feature learning in neural networks refers to how the weights of a neural network **evolve to extract different features (Eq.6)**, especially when trained from scratch [2,11,51,58]. **The objectives are not limited to ERM**.
The last layer training method is applied to the **fixed trained features** and is limited to ERM, which serves as an indirect measure of feature learning for deep and complex networks where explicit quantities in Eq. 6 are not accessible.
Another key difference is that our analysis studies training a neural network based on **dataset containing spurious correlations (aligned with the setting in OOD generalization literature)**, while last layer training requires an **unbiased dataset** (i.e., without spurious correlations) to (indirectly) examine the invariant feature learning.
> W1.3 The setup for IRM
The use of scalar in IRMv1 is because of the complicated formulation of the original IRM framework which involves a bilevel optimization (more details can be found in [13]). The gradient penalty in IRMv1 is **a practical variant to regularize** the feature learning in deep networks to focus on invariant features, and has gained lots of success[4].
To avoid misunderstanding, we removed the sentence “defined classifier w as the scalar 1” in the draft. We understand that Reviewer Jcjd may not be familiar with the literature of OOD generalization and IRM, nevertheless, we are happy to provide more details for any specific points unclear to Reviewer Jcjd.
> W2 Theory and method.
As clarified, our theory precisely analyzes the feature learning of a CNN when optimized via ERM and IRMv1, and characterizes the functionalities of ERM and OOD objectives. Specifically, the second part of our theory implies that IRMv1 alone cannot learn new features but requires high-quality feature representations for OOD generalization; hence the pre-training stage needs to learn high-quality features. As pre-training with ERM may not learn all of the features useful for OOD generalization, our algorithm is motivated to strengthen the pre-training stage for better OOD generalization.
**Without our theory, it’s unclear which stage needs to be improved and what objectives need to be used for improving OOD generalization.**
> W3 Presentation
As shown in Eq. 7, we need to initialize (instead of “reinitialize”) a new classifier in each round to learn new features from the augmentation set while keeping the historical classifiers to retain the features already learned in each previous round.
We complemented the introduction of $D^a_i, D^r_i$ in line 262: $D_i^a$ and $D_i^r$ are the corresponding augmentation and retention set elicited at $i$-th round.
We also gave specific details in the revised paper, about the differences between iFAT and FAT, that iFAT stores only $D_{k-1}^a, D_{k-1}^r, w_{k-1}$ at the $k$-th round.
> W4 Performance
We need to clarify that achieving a large improvement on these real-world datasets is far more challenging than on other ML tasks, as one may see on the WILDS leaderboard. The consistent improvements of FAT therefore serve as strong evidence for its superiority.
> Q1 Eq 5
$L_e$ is the ERM risk averaged over the $n_e$ samples in environment $e$. The IRMv1 penalty (Eq. 4) squares the gradient with respect to $L_e$, and thus the $1/n_e$ factor in $L_e$ becomes $1/n_e^2$ in Eq. 5.
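To make the scaling concrete, here is a minimal numerical sketch (the per-sample gradients are arbitrary stand-ins) confirming that squaring an averaged gradient turns the $1/n_e$ factor into $1/n_e^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
n_e = 10
g = rng.normal(size=n_e)          # stand-ins for the per-sample gradients dl_i/dw

grad_L_e = g.mean()               # gradient of the averaged risk L_e = (1/n_e) sum_i l_i
penalty = grad_L_e ** 2           # squared-gradient penalty, as in the IRMv1 term

# The same penalty with the 1/n_e^2 factor made explicit, matching Eq. 5:
penalty_explicit = (1.0 / n_e**2) * g.sum() ** 2
```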
> Q2 Memory cost
FAT needs to store all $D^a_i$, $D^r_i$ and $w_i$ from previous rounds. The storage of these items does not cost much memory, while training using FAT objective (Eq.7) with all previous subsets can lead to OOM for a large network, as Eq. 7 increases the batch size by a factor of $k$ for the $k$-th round.
> Q3 DFR results
Note that the results are aligned with DFR [28] and its seminal works [25,46]. We strictly follow their protocol with details provided in line 325 and Appendix E.2.2, that we use an unbiased subset to train the last layer with frozen ERM learned features. The results demonstrate ERM already learns invariant features, aligned with our Thm 4.1.
**References**
[a] Deep linear networks with arbitrary loss: All local minima are global, ICML’18.
[b] Implicit bias of gradient descent on linear convolutional networks, NeurIPS’18.
[c] Representation costs of linear neural networks: Analysis and design, NeurIPS’21.
[d] Inductive bias of multi-channel linear convolutional networks with bounded weight norm, COLT’22.
---
Rebuttal 2:
Title: A summary of rebuttal in response to Reviewer Jcjd
Comment: Dear Reviewer Jcjd,
We want to thank you again for reviewing our paper. We have responded to each of the weaknesses/questions you raised in the review. In summary,
- We have clarified the misconception in your reasoning about the linear activation function, and showed that our results could be generalized to the non-linear activation function;
- We have shown how our theoretical results could motivate the proposed method;
- We have improved the paper presentation following your suggestions.
We would appreciate it if you could take a look at our responses and let us know if any of your remaining concerns are not addressed, and we would try our best to address them.
---
Rebuttal 3:
Title: A gentle reminder for the closing rebuttal window
Comment: Dear Reviewer Jcjd,
We would like to remind you that the rebuttal will be closed very soon. To allow us sufficient time to discuss with you your concerns about our work, we would appreciate it if you could take some time to read our rebuttal and give us some feedback. Thank you very much.
---
Rebuttal Comment 3.1:
Title: Updated the score
Comment: Dear authors, thank you for the rebuttal. Based on the clarifications and the new results, I have updated the score to a borderline accept. I believe some of my concerns still hold, such as the limitations of the theory, and limited empirical improvements, but I am not opposed to accepting the paper.
---
Reply to Comment 3.1.1:
Title: Thank you for checking our rebuttal
Comment: Dear Reviewer Jcjd,
Thank you very much for checking our rebuttal. We are glad that our clarifications and new results changed your opinion about our work. We understand there are still limitations in our theory and method, but as the first work to characterize the feature learning of ERM and OOD objectives and their interactions under distribution shifts, we believe our work could facilitate the understanding of feature learning under distribution shifts and lay the foundation for future work on representation learning. Future extensions could be generalizing our theory to more complex networks and objectives, and improving the data partitioning and feature learning in our method to better tackle the challenging real-world OOD generalization problem. | Summary: This work aims to understand and compare feature learning in ERM and certain OOD generalization objectives. Additionally, it proposes an approach to enhance feature learning for improved OOD generalization.
First, the authors examine data consisting of invariant and spurious features. They theoretically show that when training a two-layer CNN with ERM, it learns both features. However, when the spurious correlation is stronger, the spurious features are learned faster. Additionally, they explore the IRMv1 objective and illustrate that fine-tuning the ERM-trained model using the IRMv1 objective does not result in the learning of new features.
To improve feature learning compared to ERM, the authors propose a technique called feature augmented training (FAT). At each training step, they partition the data into two sets based on the correctness of predictions. The set with accurate predictions is added to an augmentation set, while the set with incorrect predictions is added to a retention set. Both sets expand with each training round. The training objective involves applying Distributionally Robust Optimization (DRO) on the augmentation set to learn new features, combined with ERM on the retention set to retain the learned features.
The authors demonstrate that using OOD objectives on models trained with FAT leads to improvement in OOD performance on two variants of the colored MNIST dataset and six datasets from the WILDS benchmark.
Strengths: 1. This work aims to improve feature learning for better OOD generalization, which has been identified as a limitation of existing OOD objectives in recent research [1]. While the authors acknowledge that the idea to do this is not new [2], the proposed approach FAT is novel and demonstrates effectiveness in enhancing OOD performance. FAT incorporates a termination step to halt further training once the model has acquired sufficiently rich features, and the authors also propose an efficient version, iFAT, to make the approach practical for implementation.
2. The theoretical results presented in this work provide intriguing insights into feature learning with ERM and the observation [1] that OOD generalization objectives do not improve feature learning.
3. In their comparisons, the authors extensively evaluate ERM and Bonsai [2]—the primary contenders for feature learning—as well as various OOD objectives applied on top of these methods, across multiple datasets (as shown in Table 2). The results demonstrate that FAT enhances OOD performance.
Weaknesses: 1. The claim that ERM does not learn sufficiently good feature representations, while FAT improves them, requires stronger support and evidence.
2. Some sections of the writing require clarifications to improve understanding and readability.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Improvement in feature learning compared to ERM:
- In Section 6, it is mentioned that FAT, Bonsai [2] and ERM are generally trained for the same number of overall epochs. However, given that ERM is computationally less expensive than the other two methods, it would be valuable to investigate whether training ERM for a longer duration can further enhance the learned feature representations, or if the representations stop improving after a certain number of training epochs. This analysis would provide insight into the extent to which FAT and Bonsai truly improve feature learning, especially considering that they do not consistently lead to significant improvements in OOD performance in many cases (as observed in Table 2).
- To better understand and compare feature learning in ERM and FAT, it would be beneficial to include additional analyses, such as visualizing the saliency maps using GradCAM.
2. Clarity:
- In the caption for Fig. 1, the description of FAT is unclear and it does not adequately explain the illustration of FAT in Fig. 1(a). Although the experimental results in Figure 1(b) provide informative insights, it would be more comprehensive to include details about the dataset and the number of training epochs used for FAT.
- The description of the method in the introduction (lines 55-58) is unclear and difficult to comprehend without referencing Algorithm 1.
- Concerning the algorithm, the rationale behind Line 16 (returning the average of all linear classifiers) is unclear and could benefit from further explanation.
3. Minor:
- Please address any typos, e.g., in the Fig. 1 caption, and inconsistencies in notation, e.g., in Equation (5), Lemma 3.2, etc.
- It may be worth considering renaming the method from FAT to FeAT, as it better reflects the focus of the approach on improved feature learning.
References:
[1] P. Kirichenko, P. Izmailov, and A. G. Wilson. Last layer re-training is sufficient for robustness to spurious correlations. arXiv preprint arXiv:2204.02937, 2022.
[2] J. Zhang, D. Lopez-Paz, and L. Bottou. Rich feature construction for the optimization-generalization dilemma. In International Conference on Machine Learning, pages 26397–26411, 529 2022.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations:
[-] Reproducibility: The authors have not provided the code for implementing their approach. While the Appendix contains most of the necessary implementation details, sharing the code would greatly facilitate the verification of their findings.
[+] The authors acknowledge the limitation that feature learning is influenced by various factors, and this work specifically aims to investigate feature learning in the context of spurious and invariant features.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your support and constructive comments! Please see our detailed responses to your comments and suggestions below where we use reference numbers in our manuscript due to the character limit:
> Q1.1 The effects of longer ERM training epoch.
Thank you for the insightful question. **In fact, the representations learned by ERM stop improving after a certain number of ERM training epochs, as shown in Fig. 10 of [25], and can even degrade when fed to OOD training on some datasets (Fig. 1b)**, which highlights a drawback of ERM feature learning. Note that for deep models, feature learning quickly saturates within one epoch [25], which is well within the training epochs we use (Table 10). Therefore, the consistent improvements brought by FAT are non-trivial.
> Q1.2 GradCAM
We followed your suggestion and plotted the saliency maps to better understand the feature learning of different algorithms. The figures are attached to the rebuttal pdf and to our latest draft as well.
We first visualize the feature learning of ERM and FAT on ColoredMNIST-025. It can be found that
- ERM can learn both invariant and spurious features to predict the label, aligned with our theory.
- However, ERM focuses more on spurious features and even **forgets** certain features with longer training, which could be due to multiple reasons such as the simplicity bias of ERM [a]. Hence, predictions based on ERM-learned features fail to generalize to OOD examples.
- In contrast, FAT effectively captures the meaningful features for all samples and generalizes to OOD examples well.
We also visualize the saliency maps of ERM, Bonsai and FAT on all real-world datasets used in our work. It can be found that, across various tasks and data modalities, FAT effectively learns more meaningful and diverse features than ERM and Bonsai, which serve as strong evidence for the consistent superiority of FAT in OOD generalization.
> Q2.1 Caption of Fig.1,
We included more details when introducing FAT:
FAT iteratively checks and divides $D_{tr}$ into augmentation $D^a$ and retention sets $D^r$ that contain features not learned and already learned by the current model at the round, respectively. Then FAT augments the model with new features while retaining the already learned features, which leads to richer features for OOD training and better OOD performance.
We also included the dataset and epochs/rounds used in FAT when introducing the experiments of Fig.1.
> Q2.2 Introduction
We revised the introduction with a more intuitive explanation:
In each round, FAT separates the train set into two subsets according to whether the underlying features in each set are already learned (Retention set $D^r$) or not (Augmentation set $D^a$), by examining whether the model yields correct ($D^r$) or incorrect ($D^a$) predictions for samples from the subsets, respectively. Intuitively, $D^a$ and $D^r$ will contain distinct features that are separated in different rounds. Then, FAT performs distributionally robust optimization (DRO) on all subsets, which *augments* the model to learn new features by minimizing ERM losses on all $D^a$ and *retains* the already learned features by retaining ERM losses on all $D^r$.
> Q2.3 Explanation of the algorithm return
We revised the paper to add the explanation. The rationale is that the average of the historical classifiers could be a good initial point, as it already capitalizes on all the features learned in each round.
> Q3.1 Typos
We have double checked and corrected all the typos and inconsistencies.
> Q3.2 A new name
Thanks for this wonderful suggestion! We switched the name to FeAT in the revised draft! Nevertheless, to avoid confusing the other reviewers, we will use FAT for consistency during the rebuttal.
> L1 Reproducibility
As stated at submission time (“We will provide a link to an anonymous repository of our code during the discussion phase.”), we have provided the code to the AC via an anonymized link, following the rebuttal guideline that we are not allowed to disclose any external links in the rebuttal.
**References**
[a] The pitfalls of simplicity bias in neural networks, NeurIPS'20.
Please let us know if you have any further questions. We’d be grateful if you could take the above responses into consideration when making the final evaluation of our work.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed responses. Many of my concerns have been addressed. Additional comments/questions:
1. Regarding saliency maps shown in the PDF: Thank you for including these visualizations for several datasets.
- For Fig. 1, what is the color scale for grey-colored GradCAM and GradCAM visualizations? It is hard to interpret the visualizations without the color scale.
- For Fig. 2, it would be useful to discuss what features are useful and task-relevant for the image datasets. I can see differences in the saliency maps for images from FMoW and iWildCam datasets across the three methods, whereas the differences for images from Camelyon17 and RxRx1 datasets are not very clear.
- It would be useful to include more examples for datasets from the WILDS benchmark in the revised version. I also suggest including the predicted and ground truth labels for these examples for better understanding. E.g., in the current example from CivilComments, while FAT relies less on some demographic attributes compared to ERM, all methods seem to rely highly on the word *ridiculous*, which is relevant to the task.
- There are some typos in the captions that should be corrected.
2. Regarding description of the algorithm: I still think that the description of the method in the Fig. 1 caption and introduction of the paper needs further clarity. Consider using separate sentences for the description and the intuition to make it easier to understand. In the current explanation, it is not clear that the augmentation and retention sets grow at every step. Some phrases like *augmenting the model* and *retaining ERM losses* can also be refined.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer NB5i (1/2)
Comment: Thank you for the comments! We are glad to hear that many of your concerns are addressed in our previous response. Let us address your left questions:
> Q1.1 Color scales:
Both of them have a color scale of [0,255], which is scaled up from the originally normalized data in [0,1].
- The grey-colored GradCAM contains only the GradCAM information: a whiter color indicates a higher GradCAM value.
- The GradCAM visualization converts the grey-colored GradCAM to RGB: a warmer color indicates a higher GradCAM value.
Correspondingly, a region that appears whiter in the grey-colored GradCAM visualization appears warmer in the RGB GradCAM visualization.
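As a hedged illustration of this conversion (our own toy sketch; the array values and the simple blue-to-red ramp are made up for the example and are not the authors' actual colormap):

```python
import numpy as np

# Hypothetical normalized GradCAM map in [0, 1].
cam = np.array([[0.00, 0.50],
                [0.75, 1.00]])

# Grey-colored GradCAM: scale [0, 1] up to [0, 255]; whiter = higher value.
grey = (cam * 255).astype(np.uint8)

# RGB GradCAM: map grey values so that warmer colors mean higher values.
# A hand-rolled ramp (blue -> green -> red), only for illustration.
def to_rgb(v):
    t = v / 255.0
    r = int(255 * min(1.0, 1.5 * t))         # red grows with the value
    g = int(255 * (1 - abs(2 * t - 1)))      # green peaks in the middle
    b = int(255 * min(1.0, 1.5 * (1 - t)))   # blue grows as the value drops
    return (r, g, b)

rgb = np.array([[to_rgb(v) for v in row] for row in grey], dtype=np.uint8)
# The whitest grey pixel (255) maps to the warmest color (pure red).
```

With this mapping, the correspondence stated above holds by construction: the pixel that is whitest in the grey map is the warmest in the RGB map.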
We will include the aforementioned details in our caption of the GradCAM visualizations.
> Q1.2 What features are useful and task-relevant in Fig.2.
We agree that it is important to include more details about what the task-relevant features are. Regrettably, the authors are not experts in pathology (tumor tissue detection in Camelyon17) nor biology (genetic treatment effects detection in RxRx1), and could only provide some intuitive discussion about the learned features in Camelyon17 and RxRx1:
- Camelyon17:
- Relevant features: Typically, a pathologist looks at the sizes, shapes, and arrangements of the cells and their nuclei to identify tumor tissue. Small nodes indicate normal lymph cells, while cells with large nuclei could be macrophage cells, which imply a potential infection and the spread of cancer.
- Analysis of the example: In the upper-left part of the center image, there are two large cell nuclei, which are identified by ERM and FAT but missed by Bonsai.
- RxRx1:
- Relevant features: The raw data are fluorescent microscopy images of human cells, and one typically needs to examine the characteristics of the cells, including their morphology and distribution, to identify which genetic treatment was used.
- Analysis of the example: For the white regions in the image, which reflect some morphological cell features, it can be seen that Bonsai and ERM fail to capture the full cell features in the upper-right part.
From the rich feature learning perspective, we need to clarify that **the model is expected to learn all predictive features, no matter whether they are spurious or not**. Typically, both the ERM and Bonsai learned features are part of the predictive features. From the shown examples, it can be found that:
- ERM can fail to capture part of Bonsai learned features;
- Bonsai can fail to capture part of ERM learned features;
- FAT typically captures both ERM and Bonsai learned features;
We believe the visualizations could be strong evidence confirming the better capability in learning rich features of FAT, compared to its counterparts.
> Q1.3 More examples and better understandings.
Yes, due to the page limit, we could only showcase one example randomly selected from each Wilds dataset, and we will visualize more examples, including the original images, the labels, and the predictions in the revised version.
The predictions and labels for the current examples are:
| | Camelyon17 | FMoW | iWildCam | RxRx1 | CivilComments | Amazon |
|--------------|------------|------|----------|-------|---------------|--------|
| Ground Truth | 1 | 40 | 36 | 1138 | toxic | 2 |
| ERM | 1 | 40 | 36 | 812 | toxic | 3 |
| Bonsai | 0 | 40 | 36 | 812 | toxic | 3 |
| FAT | 1 | 40 | 36 | 812 | toxic | 2 |
Note that **the primary objective of feature learning is to capture all predictive features**, while making correct predictions is not the primary objective as we could apply OOD methods such as IRMv1 to identify the invariant features.
For the example of CivilComments, all algorithms make correct predictions, while both ERM and Bonsai could fail to capture sensitive features such as “persecution of jews , intellectuals , communists , social democrats , etc” in the shown example. In contrast, FAT could fully capture the feature.
The same phenomenon is also observed in the Amazon example, where ERM and Bonsai could fail to fully capture some relevant features such as “never provides entertainment”, “rather”, “nothing to say”, and “too long”.
> Q1.4 Typos in the captions.
We appreciate the carefulness of Reviewer NB5i. We double checked and corrected typos such as “ Saliency map of feature learning”, “the learned features that contributed most”.
---
Reply to Comment 1.1.2:
Title: Response to Reviewer NB5i (2/2)
Comment:
> Q2.1 Caption of Fig.1 and the introduction of the paper.
We further refined the descriptions according to your suggestions, where we use bold fonts to highlight the modified texts:
- Caption of Fig.1: Iteratively, FAT divides $D\_{tr}$ into augmentation $D^a$ and retention sets $D^r$ that contain features not learned and already learned by the current model at the round, respectively. **In each round, FAT augments the model with new features contained in the growing augmentation sets while retaining the already learned features contained in the retention sets, which will lead the model to learn richer features for OOD training and obtain a better OOD performance**.
- Introduction: In each round, FAT separates the train set into two subsets according to whether the underlying features in each set are already learned (Retention set $D^r$) or not (Augmentation set $D^a$), by examining whether the model yields correct ($D^r$) or incorrect ($D^a$) predictions for samples from the subsets, respectively. Intuitively, $D^a$ and $D^r$ will contain distinct features that are separated in different rounds. Then, FAT performs distributionally robust optimization (DRO) on all subsets, which *augments* the model to learn new features by **minimizing the maximal ERM loss on all $D^a$** and ***retains* the already learned features by minimizing ERM losses on all $D^r$.** **Along with the growth of the augmentation and retention sets, FAT is able to learn richer features for OOD training and obtain a better OOD performance**.
We hope our revised version will be clearer and more intuitive to understand. Nevertheless, we are happy to take suggestions to further improve it.
Please let us know if you have any further questions. We’d be grateful if you could take the above responses into consideration when making the final evaluation of our work. | Summary: This paper explores the relationshp between the ERM training and OOD generalization in feature learning. The authors analyze the corresponding learned features by ERM and OOD objectives. To answer the question, they conduct the investigation of feature learning in a two-layer CNN network training with ERM and IRMv1. They adopt the data models proposed by [2, 11], and include features with different correlation degrees to the labels to simulate invariant and spurious features like [26]. The theoretocal results extend [51] from data augmentation to ERM learning process. They find that ERM fails since it learns the spurious features more quickly than invariant features, when spurious correlations are stronger than invariant correlations. However, invariant feature learning also happens with RRM so long as the invariant feature has a non-trivial correlation strength with the labels. Moreovoer, they find that IRMv1 requires sufficiently well-learned features for OOD generalization. Compared with the former workshop version, this conference submission adds the so-called Feature Augmented Training (FAT) to learn features for OOD generalization. The proposed method iteratively augments the model to learn new features while retaining the already learned features. In each round, the retention and augmentation operations are performed distributionally robust optimization on different subsets of the training data that capture distinct features. The experimental results verify the promising OOD generalization performance of the proposed method.
Strengths: 1. Interesting theoretical findings.
2. Promising empirical results.
Weaknesses: 1. The symbol system is hard to follow. Some superscripts and subscripts are redundant: L_S in Eq.(2), \ell' in Eq.(5), o_d in Thm 4.2. Please double-check these symbols and make them clear.
2. Some symbols are not well-defined, such as the \cdot in the predictor f, little o, big O, \Omega, \Theta.
3. The activation function and the variance share the same symbol.
4. Thm 4.1 (informal) does not provide quantitative results describing how much larger the increment of the spurious feature is than that of the invariant feature at any iteration. Although this is an informal statement, the quantitative results are important.
5. It is better to discuss the computational complexity of the proposed method and compare with the other competitors.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. It would be better to state Lemma 3.2 in the supplementary material.
2. The details of Fig. 2 are important. They should be presented in the main body.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and your positive feedback! Please see our detailed responses to your comments and suggestions below.
> W1 Some superscripts and subscripts are redundant: L_S in Eq.(2), \ell' in Eq.(5), o_d in Thm 4.2. Please double-check these symbols and make them clear.
Thanks for your suggestions. We have revised our draft accordingly:
- We changed $L_S$ to $L$ in Eq.(2);
- We changed $\ell_i’$ to $\ell\_i’^e$ in Eq. (5) referring to the first order derivative of $L_e$ with respect to the $i$-th sample at $e$-th environment.
- By using $o_d$, we mean that the value is small with respect to the dimension $d$. This means that the requirements of the value don't grow with $d$, at least asymptotically.
We also double-checked the other notations and simplified the superscripts and subscripts where unnecessary. Besides, we provide a table of key notations in the Appendix to make it easier for readers to follow our notation.
> W2 Some symbols are not well-defined, such as the \cdot in the predictor f, little o, big O, \Omega, \Theta.
We have included a full introduction of the notations in Appendix A and provide pointers in the main text. Below are the specific definitions for the notations mentioned in the review:
- For $\cdot$ in the predictor $f$, we refer to the product with a vector or a scalar. The use of $\cdot$ in $f$ (Eq. (4)) is because of the relaxation of IRMv1 that relaxes $w$ to be a scalar.
- The other notations are used to compare two sequences $\{ a_n \}$ and $\{b_n \}$: we employ standard asymptotic notations such as $O(\cdot)$, $o(\cdot)$, $\Omega(\cdot)$, and $\Theta(\cdot)$ to describe their limiting behavior. Specifically, we write $a_n = O(b_n)$ if there exist a positive real number $C_1$ and a positive integer $N$ such that $|a_n| \le C_1 |b_n|$ for all $n \ge N$. Similarly, we write $a_n = \Omega (b_n)$ if there exist $C_2 > 0$ and $N > 0$ such that $|a_n| > C_2 |b_n|$ for all $n \ge N$. We say $a_n = \Theta(b_n)$ if $a_n = O(b_n)$ and $a_n = \Omega(b_n)$. Besides, if $\lim_{n \rightarrow \infty} |a_n/b_n| = 0$, we express this as $a_n = o(b_n)$. We use $\widetilde{O}(\cdot)$, $\widetilde{\Omega}(\cdot)$, and $\widetilde{\Theta}(\cdot)$ to hide logarithmic factors in these notations, respectively. Moreover, we denote $a_n = \textrm{poly} (b_n)$ if $a_n = O((b_n)^p)$ for some positive constant $p$, and $a_n = \textrm{polylog}(b_n)$ if $a_n = \textrm{poly}( \log(b_n))$.
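As a small illustrative check of these definitions (a toy example we add here for intuition; the sequences and constants are made up and not from the paper):

```python
# Finite-range proxies for the asymptotic definitions above (illustration only;
# a true check would need all n >= N, which we approximate on [N, 10^4)).
def is_big_O(a, b, C, N, n_max=10_000):
    return all(abs(a(n)) <= C * abs(b(n)) for n in range(N, n_max))

def is_big_Omega(a, b, C, N, n_max=10_000):
    return all(abs(a(n)) > C * abs(b(n)) for n in range(N, n_max))

a = lambda n: 3 * n + 10   # a_n = 3n + 10
b = lambda n: n            # b_n = n

big_O = is_big_O(a, b, C=4, N=10)          # a_n = O(b_n): 3n+10 <= 4n for n >= 10
big_Omega = is_big_Omega(a, b, C=2, N=1)   # a_n = Omega(b_n): 3n+10 > 2n for n >= 1
big_Theta = big_O and big_Omega            # hence a_n = Theta(b_n)
```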
> W3 The activation function and the variance share the same symbol.
We have revised our draft and changed the activation function symbol to $\psi$.
> W4 Quantitative results in Thm 4.1 (informal).
The quantitative results of how much the increment of the spurious feature is larger than that of the invariant feature can be found at Eq. (15) in Appendix C.2. This difference is primarily determined by the empirical distributions of $\textup{Rad}(\alpha)$ and $\textup{Rad}(\beta_e)$, reflected in the quantities $\overline{C}_{j\ell} = \sum_e {\frac{1}{n_e} |\lbrace i\mid \textup{Rad}(\alpha)_i = j, \textup{Rad}(\beta_e)_i = \ell, i\in \mathcal{E}_e\rbrace| }$ for $j\in\lbrace \pm 1\rbrace,\ell\in \lbrace\pm 1\rbrace$. This (positive) difference decreases monotonically from Eq. (17) towards 0. We have added more quantitative results in Thm 4.1.
> W5 Computational complexity.
We have revised the draft to include a discussion of computational complexity in Sec. 5.2:
Compared to ERM, the additional computational and memory overhead introduced by FAT mainly lies in the FAT training and partitioning. At each training step of the $k$-th round, Bonsai needs $(k-1)$ additional forward and backward propagations, while FAT needs only $\min(1,k-1)$ additional propagations. Besides, Bonsai additionally requires another round of training with $(K-1)$ additional propagations, given $K$ total rounds.
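A toy sketch of this bookkeeping (our own illustration of the counts above, not the actual training code):

```python
# Additional forward/backward propagations per training step at round k,
# on top of the single propagation that plain ERM always performs.
def extra_props_fat(k):
    return min(1, k - 1)          # FAT: at most one extra propagation per step

def extra_props_bonsai(k):
    return k - 1                  # Bonsai: grows linearly with the round index

def bonsai_synthetic_round(K):
    return K - 1                  # Bonsai's extra synthetic training round

# Example with K = 3 total rounds: extra propagations per step, summed
# over the rounds.
K = 3
fat_total = sum(extra_props_fat(k) for k in range(1, K + 1))      # 0 + 1 + 1
bonsai_total = (sum(extra_props_bonsai(k) for k in range(1, K + 1))
                + bonsai_synthetic_round(K))                      # 0 + 1 + 2 + 2
```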
We calculated the computational overhead:
| | Camelyon17 training time | Camelyon17 memory (%) | CivilComments training time | CivilComments memory (%) |
|--------|-----------------------|----------------|--------------------|----------------|
| ERM | 56.21$\pm$8.29 mins | 22.56$\pm$0.00 | 24.22$\pm$0.33 hrs | 36.46$\pm$0.00 |
| Bonsai | 214.55$\pm$1.13 mins | 51.75$\pm$0.01 | 58.47$\pm$0.91 hrs | 64.43$\pm$0.31 |
| FAT | 101.14$\pm$12.79 mins | 51.92$\pm$0.04 | 28.19$\pm$1.15 hrs | 56.21$\pm$0.48 |
The results align with our discussion: Bonsai requires much more time due to the additional synthetic round, and much more memory when there are 3 or more rounds. In contrast, FAT achieves the best OOD performance without introducing too much additional computational overhead.
> Q1 Claim Lma 3.2 to supplementary part.
Lemma 3.2 introduces key concepts of feature learning in neural networks by decomposing the weights into two parts. While we agree that moving Lemma 3.2 would improve clarity, it could make it hard for readers who are unfamiliar with the feature-learning literature to follow our results (e.g., Reviewer Jcjd).
> Q2 The details of Fig.2 are important.
We included the details in our updated draft: the invariant and spurious feature learning terms plotted in Fig. 2 are the mean of $\langle \mathbf{w}\_{j,r}, j\mathbf{v}\_1 \rangle$ and $\langle \mathbf{w}\_{j,r}, j\mathbf{v}\_2 \rangle$ for $j\in \lbrace\pm 1\rbrace, r\in[m]$, respectively.
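To make the plotted quantity concrete, here is a hedged numerical sketch of computing such feature-learning terms (the dimensions, weights, and feature directions below are made up for illustration and are not the paper's actual parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 4, 10                                   # m neurons per class sign, input dim d
v1 = rng.standard_normal(d)                    # stand-in invariant feature direction
v2 = rng.standard_normal(d)                    # stand-in spurious feature direction
W = {j: rng.standard_normal((m, d)) for j in (+1, -1)}   # weights w_{j,r}

def feature_learning_term(W, v, m):
    # mean of <w_{j,r}, j*v> over j in {+1, -1} and r in [m]
    return float(np.mean([W[j][r] @ (j * v)
                          for j in (+1, -1) for r in range(m)]))

inv_term = feature_learning_term(W, v1, m)     # invariant feature learning term
spu_term = feature_learning_term(W, v2, m)     # spurious feature learning term
```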
We’d appreciate it if you could take the above responses into consideration when making the final evaluation of our work. Please let us know if there are any outstanding questions.
---
Rebuttal 2:
Title: A summary of rebuttal in response to Reviewer KJZ8
Comment: Dear Reviewer KJZ8,
We want to thank you again for reviewing our paper. We have responded to each of the weaknesses/questions you raised in the review. In summary,
- We have revised our paper to improve the readability of our symbol system and details of Fig.2 following your suggestion;
- We have discussed the quantitative results of Thm 4.1 and compared the computational complexity of our method with other competitors.
We would appreciate it if you could take a look at our responses and let us know if any of your remaining concerns are not addressed, and we would try our best to address them.
---
Rebuttal 3:
Title: A gentle reminder for the closing rebuttal window
Comment: Dear Reviewer KJZ8,
We would like to remind you that the rebuttal will be closed very soon. To allow us sufficient time to discuss with you your concerns about our work, we would appreciate it if you could take some time to read our rebuttal and give us some feedback. Thank you very much. | Rebuttal 1:
Rebuttal: Dear reviewers,
We thank the reviewers for their many helpful comments and suggestions.
Most reviewers agree that our theoretical findings are interesting, important and useful (KJZ8, NB5i, qcQH). The insights we obtained deepen the understanding of feature learning under distribution shifts (qcQH) and serve as a solid motivation for learning rich features for better OOD generalization (NB5i, qcQH). All reviewers agree that the proposed solution FAT is novel and the empirical improvements are strong and promising.
Regarding the reviewers’ concerns, we believe they can be addressed and we have revised our draft according to the reviewers’ valuable suggestions. In the following, we give our responses to the reviewers’ concerns and suggestions.
1. Regarding the linear setting (Jcjd, qcQH), we’d like to clarify that, because this is the **first** work to theoretically analyze the feature learning of ERM and OOD objectives and their interactions in OOD generalization:
- We chose a **minimal setting** where we can observe all the necessary phenomena and which enables us to study complicated OOD objectives (e.g., IRMv1) that involve high-order derivatives. **The results with IRM regularization we obtained are not obvious under non-convexity, despite the linear activation**.
- Nevertheless, **we also show that our key theoretical results could be generalized to the non-linear setting**. Please find the details in [our response to Reviewer qcQH](https://openreview.net/forum?id=eozEoAtjG8&noteId=KmRvdQIZ2C) due to the character limits.
- Going beyond the linearity of activations, the experiments in Fig. 1b show that, our theoretical results align with the empirical discoveries in more complex settings.
2. Regarding more analysis of feature learning (NB5i, qcQH), we plotted the saliency maps to better understand the feature learning of different algorithms. The figures are attached to the rebuttal pdf and to our latest draft as well.
- We first visualize the feature learning of ERM and FAT on ColoredMNIST-025. It can be found that ERM can learn both invariant and spurious features to predict the label, aligned with our theory.
- However, ERM focuses more on spurious features and even forgets certain features with longer training epochs, which could be due to multiple reasons such as the simplicity biases of ERM. Hence predictions based on ERM learned features fail to generalize to OOD examples. In contrast, FAT effectively captures the meaningful features for all samples and generalizes to OOD examples well.
- We also visualize the saliency maps of ERM, Bonsai and FAT on all real-world datasets used in our work. It can be found that, across various tasks and data modalities, FAT effectively learns more meaningful and diverse features than ERM and Bonsai, which serve as strong evidence for the consistent superiority of FAT in OOD generalization.
3. Regarding the computational complexity (KJZ8) and memory cost (Jcjd, qcQH), compared to ERM, the additional computational and memory overhead introduced in FAT mainly lie in the FAT training and partitioning.
- At each training step of the $k$-th round, Bonsai needs $(k-1)$ additional forward and backward propagations, while FAT needs only $\min(1,k-1)$ additional propagations. Besides, Bonsai additionally requires another round of training with $(K-1)$ additional propagations, given $K$ total rounds.
- Although FAT needs to store all $D^a_i$, $D^r_i$ and $w_i$ yielded in previous rounds, storing these items does not cause much of a memory issue, while **training using the FAT objective (Eq. 7) with all previous subsets can lead to OOM for a large network**, as Eq. 7 increases the batch size by a factor of $k$ in the $k$-th round. Note that the batches are typically also sampled from each environment, as shown in Table 9. Let the number of sampled domains be $d$ and the batch size from each domain be $b$; then each round additionally introduces $2d\times b$ samples into each minibatch, which is typically more than 200 and results in OOM on a V100 GPU with around 32G of GPU memory.
- We calculated the computational and memory overhead in the table below. The results aligned with our discussion. Bonsai requires much more time for the additional synthetic round and much more memory when there are 3 or more rounds. In contrast, **FAT achieves the best OOD performance without introducing too much additional computational overhead**.
| | Camelyon17 training time | Camelyon17 memory (%) | CivilComments training time | CivilComments memory (%) |
|--------|-----------------------|----------------|--------------------|----------------|
| ERM | 56.21$\pm$8.29 mins | 22.56$\pm$0.00 | 24.22$\pm$0.33 hrs | 36.46$\pm$0.00 |
| Bonsai | 214.55$\pm$1.13 mins | 51.75$\pm$0.01 | 58.47$\pm$0.91 hrs | 64.43$\pm$0.31 |
| FAT | 101.14$\pm$12.79 mins | 51.92$\pm$0.04 | 28.19$\pm$1.15 hrs | 56.21$\pm$0.48 |
Besides, we have revised our paper to correct the typos and inconsistencies, provide more intuitive explanations of several key concepts and designs, and add a table of notations for easy reference.
In addition, we have provided a link to our code for reproducing the results in our paper to the AC.
Please let us know if you have any further questions. Thanks.
Pdf: /pdf/0db06ffdbee81ecd527e7e956ceb3079573df751.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Black-Box Differential Privacy for Interactive ML | Accept (poster) | Summary: This work proposes a novel method to apply DP to the context of interactive ML.
Strengths: The method itself is an interesting proposal, well-supported with proofs and theory. This work also extensively describes prior advances in the field in great detail.
Weaknesses: The paper has numerous issues with the way it is written. Firstly, there should not be any citations in the abstract, so please reword it. Secondly, and much more importantly, this paper lacks a proper conclusion. There is no way for the reader to get a clear overview of the implications of this work or its relevance to the broader ML community. Instead, the work ends with a large algorithm definition, which really belongs either in the methods section or in the appendix. While I understand the overall idea of what the work proposes, there are no clear contributions either. I strongly encourage the authors to rework these, as otherwise it is incredibly difficult to identify the merits of their method.
Certain sections really don’t belong where they are right now: Section 1.1, in my eyes, has nothing to do with the introduction; Example 2.6, while it helps contextualise the method, takes space away from the rest of the work; many algorithms could be placed in the appendix, etc. The point being that the work is very difficult to process because some crucial sections are missing, yet some sections that were included really do not add value to the submission.
Finally, it looks to me as if this is a purely theoretical paper; there are no experiments to support what the authors propose. Given that the submission is presented as application-driven and useful for ML practitioners, this is rather odd.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: Are there any practical results to showcase the advantages of the proposed method?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 1 poor
Contribution: 2 fair
Limitations: Overall, I do not see many reasons to accept this work in its current state: While the method looks promising and may have broad implications for the use of DP in interactive setting, the manuscript does not do the method any justice and really struggles to make this accessible to a wide range of ML researchers.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **> there should not be any citations in the abstract**
This is a minor issue, and we are willing to remove the citations
**> this paper lacks a proper conclusion; there are no clear contributions; it is incredibly difficult to identify the merits; the manuscript does not do the method any justice**
We have two main contributions:
1. We present a meaningful and new privacy model for online classification.
2. We present a new algorithm satisfying our privacy notion, that exhibits a doubly exponential improvement in the error compared to prior works on private online classification.
We believe that these two contributions constitute a significant story, and will make this clearer in the paper.
**> Certain sections really don't belong where they are right now**
While our algorithms are relatively simple, the proofs are non-trivial and could not fit the main body at its current form. We are open to reorganizing the paper, and will do our best (within the page limit) towards providing more proof details and insights.
**> this is a purely theoretical paper, there are no experiments... Are there any practical results?**
This is a theory paper. We agree that concrete bounds matter, but asymptotic bounds are no less important, and the asymptotic improvements we obtain are huge (polynomial overhead vs. double exponential). We believe that our work would lead to future studies on this topic, both theoretically and practically oriented.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: I would like to thank the authors for their response to my comments.
I am still, however, not convinced that A) the paper is fit for acceptance in its current state (partially because it is missing crucial components such as a clear conclusion, contributions, etc.) and B) it is possible to address the reviewer comments in time for the deadline. Therefore my score remains unchanged.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comment.
**> missing crucial components such as clear conclusion, contributions**
As we mentioned in the rebuttal, our work presents a meaningful and new privacy model for online classification, together with a new algorithm that exhibits a *doubly exponential improvement* in the error compared to prior works on this topic. We would appreciate any specific comments or feedback as to why these contributions are not "clear". | Summary: The paper addresses the problem of privacy preserving interactive learning. In this problem, a recommendation algorithm improves its model by answering private queries performed by a set of parties in sequential rounds. The responses to users queries should adapt to the query made by each party. Therefore, in order to provide an accurate service, the algorithm's responses must be strongly sensitive to the users.
This imposes an important restriction on the privacy guarantees that can be provided using the classical notion of differential privacy (DP): classical DP requires that all responses to queries are equally privacy preserving. Therefore, if the algorithm satisfies this notion of privacy, its accuracy is limited.
To tackle this problem, the contribution proposes challenge DP: a relaxed notion of DP in which the mechanism is allowed to provide a sensitive output to the party that generated the query, while still providing a less sensitive view to other parties. Using challenge DP, the contribution proposes a construction that, only by having black-box access to an interactive learning algorithm, can construct
a privacy preserving algorithm. The construction improves the accuracy of previous work, reducing the number of errors from exponential to quadratic in d, where d is the number of mistakes made by the black-box algorithm. Furthermore, these improvements are achieved while assuming an adversary that participates in the learning process controlling multiple parties and that can adapt its input in one round of the protocol.
Strengths: The paper is very well written and key aspects of the contribution are presented clearly. The contribution is novel: the relaxation of classical DP and the POP construction are clever. The results are significant, as improvements with respect to previous work are important while assumptions on the adversary remain realistic. The overall quality is good.
Weaknesses: I only have fairly minor remarks with respect to the presentation of the contribution. While the introduction to the problem is very nicely achieved, the space dedicated to the main contributions is a bit short. I would have appreciated if it had been bigger and had provided more details. Also, there are multiple DP definitions (such as Definitions 2.1, 2.7 and 2.9) that overload the statement "X satisfies (epsilon, delta)-DP" and then understanding which algorithm satisfies which definition of DP becomes a bit confusing. Different terminology for each case could help in clarity.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1- In the definitions of DP under adaptive queries (Def 2.7) and adaptive inputs (Def 2.9), it seems that the neighboring databases are
always defined with respect to the bit b. As I understand, this means that the database only changes on the query in which the adversary
plans to be adaptive. I have the impression that this might not reflect some realistic scenarios in interactive ML. For example, consider the case in which the adversary controls all the parties except for one party: Alice. It would be nice if the output (i.e. the view of the adversary) were differentially private with respect to the participation of Alice or not (which, in my understanding, is independent of bit b). Do the aforementioned definitions contemplate this case?
2- In Theorems 4.1 and 4.2, you show how the POP construction (Algorithm 7) performs when it assumes an adversary that can adapt in a single round. However, you define challenge DP to be compatible with many adaptive/challenge rounds (group privacy). How would an increase in the number of challenge rounds allowed to the adversary (i.e. an increase of g) impact the privacy/accuracy trade-offs?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: While theoretical improvements are significant, more experimentation is required to understand if the practical deployment of the construction is feasible.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **> The space dedicated to the main contributions is a bit short**
While our algorithms are relatively simple, the proofs are non-trivial and could not fit in the main body in its current form. We are open to reorganizing the paper, and will do our best (within the page limit) to provide more proof details and insights.
**> In definitions 2.7 and 2.9 ... it seems that the neighboring databases are always defined with respect to the bit b. As I understand, this means that the database only changes on the query in which the adversary plans to be adaptive**
The adversary is adaptive throughout the execution: The bit b indexes two thought experiments which the adversary is trying to distinguish between. At every step of the execution, the adversary adaptively determines an input to be included in the execution, except for Alice’s input where the adversary specifies *two* possible inputs and only one of them is included in the execution (based on the value of the bit b which is unknown to the adversary). The adversary's goal is to distinguish between the executions with b=0 and b=1. In other words, the adversary tries to figure out which of Alice's inputs was included in the execution.
Adaptivity on every step makes a strong adversary. In particular, it might adaptively choose future inputs in order to try and guess the bit b (i.e., try to guess Alice's input).
**> Consider an adversary controlling all the parties except for Alice. It would be nice if the view of the adversary is DP with respect to the participation of Alice or not**
As is standard in the literature, these privacy notions could be stated in two flavors: Either we protect against an arbitrary *change* to one individual's data (say Alice), or we protect against the *addition/removal* of one individual. These variants also exist w.r.t. the standard definition of DP, and are very similar. Definitions 2.7 and 2.9 are stated w.r.t. the first option, but they could easily be modified for the second option (without any real effect on the rest of the paper).
**> In Theorems 4.1 and 4.2... how an increase in the number of challenge rounds allowed to the adversary (i.e. an increase of g) would impact the privacy/accuracy trade-offs?**
This could be obtained from our group-privacy theorem (Theorem 3.4). Specifically, Theorem 3.4 shows that POP (as is, with the same utility guarantees) satisfies $(g\cdot\varepsilon , g\cdot e^{\varepsilon g}\cdot \delta)$-Challenge-DP. Alternatively, by rescaling $\varepsilon$ and $\delta$, you could get an $(\varepsilon,\delta)$-Challenge-DP algorithm whose utility guarantees are degraded by a roughly $g^2$ factor (compared to Theorem 4.2).
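For intuition on where parameters of this shape come from, the standard group-privacy chaining argument for ordinary $(\varepsilon,\delta)$-DP yields the same form (this is a generic sketch under standard DP, not the proof of Theorem 3.4, which concerns the interactive Challenge-DP setting). For a chain of neighboring datasets $D_0, D_1, \dots, D_g$ and any event $T$:

```latex
\Pr[M(D_0)\in T]
  \le e^{\varepsilon}\Pr[M(D_1)\in T] + \delta
  \le e^{2\varepsilon}\Pr[M(D_2)\in T] + (1+e^{\varepsilon})\delta
  \le \cdots
  \le e^{g\varepsilon}\Pr[M(D_g)\in T] + \delta\sum_{i=0}^{g-1} e^{i\varepsilon}
  \le e^{g\varepsilon}\Pr[M(D_g)\in T] + g\, e^{g\varepsilon}\,\delta,
```

i.e., $(g\varepsilon,\, g e^{g\varepsilon}\delta)$-DP for groups of size $g$, matching the shape of the parameters quoted from Theorem 3.4 above.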
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their responses. Now my doubts are clarified. While the presentation could be improved by restructuring the content of the main text, I think the work presents a strong contribution that importantly reduces the overhead with respect to previous solutions. Therefore my score is likely to remain unchanged. | Summary: This paper studies privacy in the setting of interactive machine learning processes. Challenge DP is presented as a new relaxation of DP that is satisfies many of the desirable properties of DP and any non-private online prediction algorithm can be constructed into a Challenge DP online prediction algorithm.
Strengths: The paper tackles an important problem and provides a nice motivating example in the setting of a continually improving chatbot. It is clear from the example that the interaction is adaptive. Rather than requiring that the interaction with the chatbot be differentially private, they study a relaxation called joint differential privacy, allowing the transcript of responses to depend arbitrarily on each individual’s prompts, but the chatbot should not leak too much information about others’ conversations in each response. There has been work on this problem in the setting of (traditional) DP, but this is the first work to study mistake bounds in the setting of Joint DP, or more specifically Challenge DP (an interactive variant of JDP), which they define and prove basic properties of.
Weaknesses: The motivating example is with a chatbot, but the problem setting is for private online classification, which might not be a relevant problem for a chatbot, as it is not clear what the labels or mistakes of a chatbot would be. Is there a motivating example that is more related to private online classification?
The only proof in the paper is proving group privacy of challenge DP. Although group privacy is an important property of any privacy definition, I do not think it adds to the overall narrative of the paper, and it is not surprising, as challenge DP is still related to DP. The main results of this paper are in Theorems 4.1 and 4.2, so the paper should highlight the analysis there, or at the least provide a proof sketch. From the introduction, it is not clear why the reader should care about a group privacy property. Furthermore, it is not clear how to design challenge-DP algorithms, nor how they relate to creating an online game that is DP (as in Theorem 4.1).
Although the problem is well motivated, the full story still seems incomplete. In particular, is there an actual gap between what is achievable under joint DP and (traditional) DP? Currently, there is an existing DP approach that achieves very large mistake bound and this paper achieves a much better mistake bound, but is there a lower bound result?
Is there just a typo in Algorithm 1 and 6 where noise is not added to the threshold? The privacy analysis of AboveThreshold depends on noise being added to the threshold that is reused at each round that is not above the noisy threshold. I am willing to increase my score if this is merely a typo but would like the authors to verify.
Minor:
- \mathcal{M} denoted mistake bounds and then in Definition 2.5 \mathcal{M} is a mechanism.
- Line 116 “if the algorithm errs than at least”
- What is the optimal JDP approach for example 2.6? Seems like you need to know the median to know which ones are above or below.
- Footnote 1 on page 2, “as =composition”.
- Line 309 “That is, For every”
[Update] I have increased my score based on the rebuttal.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: is there an actual gap between what is achievable under joint (challenge) DP and (traditional) DP?
Is there just a typo in Algorithm 1 and 6 where noise is not added to the threshold?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The problem setting is for private online classification, yet the relation to a chatbot example is not quite clear.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **> The motivating example is with a chatbot... is there an example that is more related to private online classification?**
1. A hospital conducting a study on a new disease might use private online classification to predict the risk of an individual having this disease (based on available tests and medical history).
2. A bank might use private online classification to decide whether or not to grant an individual a loan.
3. An online seller might use private online classification to decide whether to suggest a promotion or not to a user.
**> The only proof in the paper is proving group privacy of challenge DP**
While our algorithms are relatively simple, the proofs are non-trivial and could not fit in the main body in its current form. We are open to reorganizing the paper, and will do our best (within the page limit) to provide more proof details and insights.
**> the full story still seems incomplete... is there an actual gap?**
We are not aware of a lower bound / separation result. In our work, we present a meaningful and new privacy model for online classification, together with a new algorithm that exhibits a doubly exponential improvement in the error compared to prior works on this topic. We believe that this tells a significant story, even without a separation result.
**> typo in Algorithm 1 and 6 where noise is not added to the threshold? I am willing to increase my score if this is merely a typo**
This is not a typo. It is true that the standard presentation of AboveThreshold utilizes a noisy threshold, which allows it to satisfy pure $(\varepsilon,0)$-DP. But the algorithm remains private even without the noise on the threshold, in which case it satisfies approximate $(\varepsilon,\delta)$-DP. This appears, for example, in [Hardt and Rothblum, 2010] and [Kaplan et al., 2021] (which we cited).
In the standard formulation of AboveThreshold, we add noise to the threshold, and resample this noise after every "above" answer. This is OK in the standard setting where the queries are considered to be non-private (and only the data points are private). However, as we mentioned in the beginning of Section 4, in our variant (ChallengeAT), the queries are also considered to be private information. Therefore, resampling the noise is problematic as it depends on the current query in a non-private way. (Replacing a constant 0 query with a constant 1 query would typically trigger a new noise sampling.) In contrast, we needed to ensure that the rest of the execution behaves similarly no matter what the current query is, and we don't want one query to generate a new noise sampling while the others do not.
We would be grateful if you could increase your score, as you suggested.
(We remark that there are variants of AboveThreshold in which we add noise to the threshold in the beginning of the execution, and never re-sample it, even after an "above" answer. This would be applicable in our setting, but the "noiseless" version helps to slightly simplify our analysis.)
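To make the distinction concrete, here is a minimal sketch of the "noiseless-threshold" AboveThreshold variant discussed above. This is an illustrative stand-in, not the paper's ChallengeAT: the query values are assumed precomputed, the noise scale is the textbook $2/\varepsilon$ choice, and the function name is our own.

```python
import math
import random

def above_threshold(query_values, threshold, eps, rng=None):
    """AboveThreshold without noise on the threshold: only the per-query
    answers are perturbed with Laplace noise, and the algorithm halts
    after reporting its first "above" answer (the approximate-DP variant
    discussed in the rebuttal; names and noise scale are illustrative).
    """
    rng = rng or random.Random()

    def laplace(scale):
        # Inverse-CDF sampling of a centered Laplace(scale) variable.
        u = rng.random() - 0.5
        return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

    answers = []
    for q in query_values:
        noisy_q = q + laplace(2.0 / eps)  # noise on the query answer only
        if noisy_q >= threshold:          # the threshold itself is NOT noised
            answers.append(True)          # report "above" ...
            break                         # ... and halt
        answers.append(False)
    return answers
```

Note that, exactly as the rebuttal argues, replacing one query value with another here never triggers a fresh noise sample for the threshold, since the threshold carries no noise at all.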
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal. Thanks for answering my questions and for clarifying the lack of noise added to the threshold. I was not aware of the variant that did not add noise to the threshold. As promised I will increase my score. I do feel that the story is still incomplete without addressing whether there is a gap or not.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comment and for updating the score! We truly believe that challenge DP is a meaningful privacy notion in our setting (and it seems that all of the reviewers agree with us on that point). The fact that we were able to leverage it in order to get a **doubly exponential improvement** is, in our opinion, a very interesting story (even if it raises open questions, which we hope will be addressed by future work). | Summary: The authors propose a new differential privacy definition with desirable online learning properties. In this new variant named Challenge Differential Privacy, the adversary can observe the output of a sequence of online queries, except a "challenge query". In this query, two possible pairs of inputs are picked by an adversary. Based on the transcript of the queries without this challenge step, the adversary needs to decide which data point has been used for the hidden query. The authors introduce an algorithm that satisfies Challenge-DP, in which an ensemble of expert models is used to decide on a majority answer, but only one of the experts is allowed to use the ground truth label of the given sample.
Strengths: * The setting proposed by the authors is interesting. Indeed, the standard definition of Differential Privacy is not well-tailored for online learning.
* The authors discuss and prove the fundamental properties of Differential Privacy for their proposed variant: post-processing, composition, and group privacy.
* The authors propose a technique to achieve ChallengeDP, POP, for which they introduce privacy and utility guarantees.
* The work is compositional; it does not introduce new learning algorithms, but it introduces a technique to convert an existing learning algorithm to a private one that satisfies ChallengeDP.
Weaknesses: * The current work is heavily reliant on the appendix. Consider making the main statements self-contained (ChallengeDP, POP).
* The paper ends with a theorem statement. There is no conclusion/future work/discussion, giving the feel of possibly unfinished work.
* $\tilde{O}$ notation inside $O$ notation (Theorem 4.4) makes it very hard for the reader to compare it to other work/really understand at a granular level the privacy guarantees. As other results have previously shown, constants/linear terms matter a lot in differential privacy; please consider adding explicit guarantees (maybe in the appendix); otherwise, it is hard to understand the benefits of using this approach.
* I would strongly suggest the authors state more clearly that in this online setting, the training algorithm that sees a sample uses it as a training point afterwards (possibly a more explicit protocol like the one in Naor et al. [0], figure 1 could be used).
* A clearer discussion about the difference between private everlasting predictions from Naor et al. [0] and challenge differential privacy could be useful, and why it is a generalization of their work (as stated in the abstract). I acknowledge that the authors provided remark 3.2, but this could be more in-detail justified and discussed, as it is one of the paper's key contributions.
[0]: https://arxiv.org/abs/2305.09579
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * The authors provided concise and clear proof for composition in the interactive setting, but I assume that this setting satisfies even the generalization of composition, namely concurrent composition [0], which is even more suitable for the problem proposed by the authors. Did the authors consider proving/looking into concurrent composition, as it seems to be a definition well suited for this setting?
* In this setting, as far as I understand, the release of the learning algorithm results in a total loss of privacy. Is that true, or could it be bounded? Might there be a possibility of releasing the model in this setting (considering that the initial learning algorithm, before the stream of queries, is not private)?
* Do the authors believe that this definition of privacy could replace $(\epsilon, \delta)$-DP for machine learning tasks, or should it be an alternative?
* Is there an equivalence/relationship that the authors observed between $(\epsilon, \delta)$-DP and ChallengeDP?
[0]: https://arxiv.org/abs/2207.08335
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: * Their proposed technique involves maintaining multiple copies of the same model, which is a possibly prohibitive approach when solving the problem proposed in the introduction, given the significant size of expert models.
* The paper ends with a theorem statement. There is no conclusion, further work or discussion. I am willing to increase the score of the paper if the presentation at the end is improved.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **> heavily reliant on the appendix**
While our algorithms are relatively simple, the proofs are non-trivial and could not fit in the main body in its current form. We are open to reorganizing the paper, and will do our best (within the page limit) to provide more proof details and insights.
**> no conclusion/future work/discussion**
We will add a conclusion section. The short summary is that our work presents a meaningful and new privacy model for online classification, together with a new algorithm that exhibits a doubly exponential improvement in the error compared to prior works on this topic.
**> constants/linear terms matter a lot; consider adding explicit guarantees**
We agree that concrete bounds matter, but asymptotic bounds obtained in theoretical work are no less important. The asymptotic improvements we obtain are huge (polynomial vs. double exponential overhead). We believe that our work would lead to future studies on this topic, both theoretically and practically oriented.
**> clearer discussion about the difference from Naor et al**
Naor et al. considers a setting where the algorithm initially gets a dataset S containing labeled examples from n users. Then, on every time step, a new user arrives and submits its unlabeled example to the algorithm (and the algorithm responds with a predicted label).
We generalize their definition to our setting. Specifically, we extend the definition to capture settings in which every user *interacts* with the algorithm (rather than just submitting its input). In the concrete application we consider (online learning) this corresponds to the user submitting its input, then obtaining the predicted label, and then submitting the "true" label. Our generalized definition (Section A in the supplementary) captures this as a special case and allows for arbitrary interactions with each user. We will make this clearer in Remark 3.2.
**> Did the authors consider proving/looking into concurrent composition**
We did not. We believe that the notion does indeed satisfy concurrent composition. Though this seems to require slightly generalizing the existing concurrent composition theorems, because (as they are stated) they assume that the dataset(s) are fixed in the beginning of the execution. In our case the dataset is "evolving".
**> the release of the learning algorithms results in a total loss of privacy, is that true?**
Correct. This "relaxation" is what allowed us to improve the bounds given in prior work (by a doubly exponential factor).
**> Do the authors believe that this definition could replace DP or should it be an alternative?**
We view it as an alternative that, as we show, would allow you to get better utility in some cases.
**> Is there an equivalence/relationship that the authors observed between DP and ChallengeDP?**
The models are not equivalent. The work of Naor et al. showed that PAC learning is possible with ChallengeDP for any concept class with finite VC dimension, which is known not to be the case with the standard notion of DP.
---
Rebuttal Comment 1.1:
Comment: I appreciate the responses, and I am also looking forward to the discussions with the other reviewers. Overall I consider this work interesting, but there are plenty of improvements to be done on the presentation side.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comment. As we mentioned in the rebuttal, we will add a conclusion section and we will do our best to reorganize the paper. We would appreciate any specific comments or feedback that could further enhance the clarity. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Train Once and Explain Everywhere: Pre-training Interpretable Graph Neural Networks | Accept (poster) | Summary: This paper proposed a novel pre-training method for interpretable graph neural networks. Interpretable GNNs are currently an important research issue and have attracted rising research attention recently. However, existing methods are generally designed for some special types of datasets, and are thus hard to generalize well to other graphs or tasks. This paper makes the first attempt to design a pre-training framework with a carefully developed labeled synthetic graph dataset. As the synthetic graphs have ground truth explanations, the pre-trained model can better capture the common structure knowledge of different types of graphs. A structure pattern learning module and a hypergraph refining module are also proposed to make the pre-trained model achieve better performance. Generally, the paper is very well written and has clear contributions. It is a nice try to design pre-trained interpretable GNNs. The experiment is extensively studied over multiple datasets, and the performance improvement over the current SOTA is significant. The evaluation on node classification and graph classification tasks further verifies the promising generalization ability of the model.
Strengths: (1) This paper for the first time proposed a pre-training interpretable GNN model, which is novel and practically important. The pre-trained model is first trained over a large synthetic graph datasets, and then fine-tuned over downstream graphs, which achieves significant performance improvement. As a general and pioneer pipeline, I believe the paper can potentially motivate a lot of following works.
(2) A structure pattern learning module and a hypergraph refining module are proposed and integrated into the pre-training model. The two modules can better and more comprehensively capture the structural patterns and the edge interactions of the graphs.
(3) The experiment is extensively studied over both real and synthetic graph datasets. The results show the significant performance improvement over current SOTA. The experiment is convincing and the code is publicly available.
(4) The paper is very well written and organized in good logic. It is easy to read and follow.
Weaknesses: (1) As the constructed synthetic graphs with ground truth labels may significantly affect the performance of the pre-training model, how to construct good synthetic graphs should be clearly explained. Currently, it is not clear enough how the data is constructed. More details on the synthetic dataset construction should be provided.
(2) It is not very clear to me why the authors need to give Theorem 1 and 2. Based on my understanding, the authors want to prove that the learning edge representation can fully preserve the graph pattern information. If it is the case, the authors should explain it more clearly and describe why it is important for interpretable GNN.
(3) In the evaluation, pi-GNN is also compared with pi-GNN-DFT, the authors explained that pi-GNN-DFT directly fine-tunes on downstream datasets without pre-training. How to implement that exactly is not explained.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: (1) How to construct a good synthetic graph dataset for pre-training? Are the ground truth GNN explanations labeled manually?
(2) In Section 4.3, the authors evaluate the model pre-trained over a graph classification dataset on the explanation task of node classification to evaluate the inter-task generalization performance. How is that done exactly? Are the graph embeddings learned from graph classification used for node classification?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: The authors have adequately addressed the limitations of their work in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >Q1: As the constructed synthetic graphs with ground truth labels may significantly affect the performance of the pre-training model. How to construct good synthetic graphs should be clearly explained. Currently, it is not clear enough on how to construct the data. More details on the synthetic dataset construction should be provided. How to construct a good synthetic graph dataset for pretraining? Are the ground truth GNN explanations labeled manually?
A1: Thanks for your suggestion.
1). We will add the generation details of the synthetic dataset in the Appendix. Following existing works [1, 2], each synthetic graph consists of a base and an influential motif, and the label is determined solely by the motif. Given one certain motif, we first sample a base from a uniform distribution over all bases, and the motif is attached to a randomly selected node in the base.
2). According to our experimental results, a good synthetic graph dataset for pre-training should first be balanced. Moreover, a larger dataset size and more motif classes also help guarantee the effectiveness of the pre-training process.
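For concreteness, the base-plus-motif construction described above can be sketched as follows. This is an illustrative stand-in, not the paper's generation code: the random-tree base, the two toy motifs, and all names here are our own assumptions (the actual datasets use, e.g., BA-style bases and house-like motifs, plus node features).

```python
import random

def make_synthetic_graph(base_size=15, rng=None):
    """Sketch of a base+motif synthetic graph in the GNNExplainer style:
    a random base graph with a class-determining motif attached to one
    base node.  Returns (all edges, ground-truth motif edges, label).
    """
    rng = rng or random.Random()
    # Base: a random tree on `base_size` nodes (each new node attaches
    # to a uniformly chosen earlier node), standing in for a BA base.
    edges = [(i, rng.randrange(i)) for i in range(1, base_size)]
    # Two toy motifs; the label is determined by the motif alone.
    motifs = {
        0: [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)],          # 5-cycle
        1: [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)],  # two triangles
    }
    label = rng.randrange(2)
    offset = base_size  # motif nodes are numbered after base nodes
    motif_edges = [(u + offset, v + offset) for u, v in motifs[label]]
    # Attach the motif to a randomly selected node in the base.
    attach_edge = (rng.randrange(base_size), offset)
    edges = edges + motif_edges + [attach_edge]
    # Ground-truth explanation: the motif edges.
    return edges, motif_edges, label
```

Because the label depends only on the motif, the motif edges serve directly as the ground-truth explanation labels used for pre-training, with no manual annotation needed.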
>Q2: It is not very clear to me why the authors need to give Theorem 1 and 2. Based on my understanding, the authors want to prove that the learning edge representation can fully preserve the graph pattern information. If it is the case, the authors should explain it more clearly and describe why it is important for interpretable GNN.
A2: Thanks for your comments. Your understanding is absolutely correct, and we will add more explanation about the importance of the edge representation with regard to interpretable GNNs. Edges are more essential to GNN explanation than nodes, as previous works [3, 4] point out. Therefore, we need a more expressive representation of edges, which allows us to provide better GNN explanations.
>Q3: In the evaluation, pi-GNN is also compared with pi-GNN-DFT, the authors explained that pi-GNN-DFT directly fine-tunes on downstream datasets without pre-training. How to implement that exactly is not explained.
A3: Thanks for your suggestion. We will append more details of the direct fine-tuning paradigm in the final version. Specifically, pi-GNN-DFT skips the pre-training process on the synthetic dataset and conducts standard training on the downstream datasets, just like the other baselines.
>Q4: In section 4.3, the authors evaluate the model pre-trained over a graph classification dataset on the explanation task of node classification to evaluate the inter-task generalization performance. How is that done exactly? Are the graph embeddings learned from graph classification used for node classification?
A4: Thanks for your comment. We will append more details of the inter-task evaluation in the final version. When evaluating the inter-task interpretation performance, the whole graph of the node classification task is fed into the pretrained explainer and the explainer will identify the explanatory subgraph of each target node in test set. The interpretation performance is evaluated on the target nodes.
[1] GNNExplainer: Generating Explanations for Graph Neural Networks. NeurIPS 2019.
[2] Graph Information Bottleneck for Subgraph Recognition. ICLR 2021.
[3] Discovering Invariant Rationales for Graph Neural Networks. ICLR 2022.
[4] Parameterized Explainer for Graph Neural Network. NeurIPS 2020.
---
Rebuttal 2:
Comment: Dear reviewer 34Ne:
Thanks again for your insightful comments, which, we believe, are very important to improve our paper.
In the rebuttal and submitted one-page pdf, we have tried to answer your questions one by one.
If you have further questions, we are very happy to discuss them.
---
Rebuttal Comment 2.1:
Title: Response to rebuttal
Comment: The authors have addressed my concerns in the rebuttal, and I will keep my score as clear accept.
---
Reply to Comment 2.1.1:
Comment: Thanks again for your positive opinion of our paper and your valuable comments. | Summary: This paper presents a novel method for GNN explainability. The key innovation of this method is that of relying on synthetic graphs with known explanations to pretrain the model. The pretraining helps to learn general explainability patterns, introducing an inductive bias. Such patterns are then aggregated and refined through specific modules. The method is experimentally evaluated on synthetic and real-world graphs, and compared to different methods recently published in the literature.
Strengths: - The manuscript is well-written and relatively easy to follow.
- The empirical performances of the reported experiments are very compelling.
Weaknesses: - The main weakness in my opinion is that the dependency between the test tasks and the synthetic pre-training dataset is not well investigated. However, the synthetic dataset is probably one of the most important hyperparameters, which could be difficult to optimize in real-world settings. I think that the authors should evaluate the impact of different pretraining datasets on different test tasks, also making sure that some test tasks include motifs not fully covered in the pretraining tasks.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See previous point.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are well addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >Q1: The main weakness in my opinion is that the dependency between the test tasks and the synthetic pre-training dataset is not well investigated. However, the synthetic dataset is probably one of the most important hyper-parameters, which could be difficult to optimize in real-world settings. I think that the authors should evaluate the impact of different pre-training datasets on different test tasks, also making sure that some test tasks include motifs not fully covered in the pre-training tasks.
A1: Thanks for your suggestion.
1). It is true that the synthetic pretraining dataset is of great importance to the downstream tasks and that selecting a good pre-training dataset may be difficult in real-world settings. To further evaluate the impact of different pretraining datasets on different test tasks, we have added extended experiments on pre-training datasets with different classes of motifs. As we state, the PT-Motifs dataset contains 5 motifs (i.e., Diamond, House, Crane, Cycle, and Star). To investigate the impact of motifs in the pre-training dataset, we generate two datasets that each contain only 3 motifs. Specifically, PT-DCrS contains Diamond, Crane, and Star, and PT-DCyH contains Diamond, Cycle, and House. Note that the House and Cycle motifs in BA-2Motifs are not covered by PT-DCrS. The results are reported in the following table.
|Interpretation|BA-2Motifs|Mutag|
|-|-|-|
|PT-Motifs|99.33|99.81|
|w/o pre-train|93.19|95.29|
|PT-DCrS|95.53|98.56|
|PT-DCyH|99.04|98.31|
|Prediction|Molhiv|Graph-SST2|
|-|-|-|
|PT-Motifs|80.86|88.05|
|w/o pre-train|79.71|83.48|
|PT-DCrS|79.95|87.26|
|PT-DCyH|79.98|88.02|
The results demonstrate that even when the pre-training dataset does not cover all the motifs in downstream datasets, the pre-training process can still improve the interpretation and prediction performance. Moreover, when fine-tuning on BA-2Motifs, pre-training on PT-DCyH significantly outperforms that on PT-DCrS.
2). Moreover, we investigate the impact of imbalanced datasets. Following existing works [1, 2], each synthetic graph consists of one base subgraph $G_b$ and one explanatory subgraph $G_e$. To generate the imbalanced datasets, we sample the explanatory subgraph $G_e$ from a uniform distribution, while the base $G_b$ is determined by $P(G_b) = b\times I(G_b=G_e) + (1-b)/4\times I(G_b\neq G_e)$. Therefore, we can manipulate the hyperparameter $b$ to control the imbalance degree, which is defined as $i=4b/(1-b)$. The experimental results on imbalanced pre-training datasets are listed in the following tables.
|Interpretation|BA-2Motifs|Mutag|
|-|-|-|
|Balanced|99.33|99.81|
|w/o pre-train|93.19|95.29|
|b=0.3, i≈1.7|98.91|98.46|
|b=0.5, i=4.0|96.15|97.10|
|b=0.7, i≈9.3|93.63|94.30|
|Prediction|Molhiv|Graph-SST2|
|-|-|-|
|Balanced|80.86|88.05|
|w/o pre-train|79.71|83.48|
|b=0.3, i≈1.7|79.42|85.34|
|b=0.5, i=4.0|77.18|83.14|
|b=0.7, i≈9.3|75.04|81.90|
The results demonstrate that when pretraining on an imbalanced dataset, the performance improvement is less significant than that on the balanced one, but still better than that without pretraining.
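As a concrete reading of the sampling scheme above, the following hedged Python sketch (assuming the 5 motif classes of PT-Motifs) draws $G_e$ uniformly and the base class with probability $b$ or $(1-b)/4$, and computes the imbalance degree $i=4b/(1-b)$; the class list and helper names are illustrative, not the authors' code.

```python
import random

# Hedged sketch of the imbalance scheme above, assuming the 5 motif classes
# of PT-Motifs: the explanatory motif G_e is drawn uniformly, and the base
# class with P(G_b) = b if G_b == G_e else (1-b)/4; imbalance i = 4b/(1-b).

CLASSES = ["Diamond", "House", "Crane", "Cycle", "Star"]

def imbalance_degree(b):
    return 4 * b / (1 - b)

def sample_pair(b, rng=random):
    ge = rng.choice(CLASSES)                       # uniform explanatory motif
    weights = [b if c == ge else (1 - b) / 4 for c in CLASSES]
    gb = rng.choices(CLASSES, weights=weights)[0]  # biased base class
    return gb, ge
```

Note that $b=0.2$ gives $i=1$ (perfectly balanced with 5 classes), while $b=0.3$, $0.5$, and $0.7$ give the $i\approx1.7$, $i=4.0$, and $i\approx9.3$ rows in the tables above.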
3). Compared with PT-Motifs (80000 graphs) in our paper, we generate PT-Motifs-M with 50000 graphs and PT-Motifs-S with 10000 graphs. The experimental results on the pretraining datasets with different scales are listed in the following table.
|Interpretation|BA-2Motifs|Mutag|
|-|-|-|
|PT-Motifs (80,000)|99.33|99.81|
|w/o pre-train|93.19|95.29|
|PT-Motifs-M (50,000)|98.91|99.06|
|PT-Motifs-S (10,000)|97.07|96.28|
|Prediction|Molhiv|Graph-SST2|
|-|-|-|
|PT-Motifs (80,000)|80.86|88.05|
|w/o pre-train|79.71|83.48|
|PT-Motifs-M (50,000)|80.77|87.59|
|PT-Motifs-S (10,000)|79.82|85.23|
The results show that even PT-Motifs-S is able to outperform the model without pre-training. Moreover, a larger pre-training dataset indeed improves the performance more significantly than a smaller one.
[1] Discovering Invariant Rationales for Graph Neural Networks. ICLR 2022.
[2] Parameterized Explainer for Graph Neural Network. NeurIPS 2020.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thanks for the response and for the careful analysis, which helps elucidate the connection between test tasks and pretraining datasets. This was my main concern reading the paper. I confirm my acceptance score of 6.
---
Reply to Comment 1.1.1:
Comment: Thanks again for your careful reading, valuable comments, and constructive suggestions, which have significantly improved our manuscript. | Summary: The paper proposes a generalizable GNN interpretation model, aiming to learn the universal structural patterns of graphs so that it can be applied to any downstream application.
Strengths: (1) The problem that the paper studies is very interesting, a model trained to identify the universal explanatory subgraph in different cases will be of great practical use.
(2) Overall, the paper is easy to follow.
(3) The authors provide theoretical analysis to support their claim.
Weaknesses: (1) Although the paper provides empirical study to demonstrate the effectiveness of the hypergraph refining module, the motivation to incorporate such a component is still not very clear. What is the advantage of the hypergraph refining module compared with a normal graph?
(2) Since the explanation model is pre-trained, the authors are encouraged to incorporate some graph SSL methods as baselines for fair comparison (like GraphCL and GraphLoG).
(3) It seems the synthetic pretraining dataset is of great importance to the final results, therefore more empirical studies should be conducted to investigate the impacts of the pretraining dataset on the final results, like the dataset size and distribution. What if the synthetic dataset is imbalanced, and what if the distributions of the pretraining dataset and downstream datasets are not aligned? Also, I wonder about the cost to generate such a synthetic dataset.
(4) The authors are encouraged to provide the training details of the baselines.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Refer the weakness section
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >Q1: The motivation of the hypergraph refining module and its advantages.
A1: For GNN explainers, the edges and the edge interactions are more essential compared with nodes and node interactions [1,2]. Therefore, we need a more expressive edge representation learning paradigm. To capture the edge interactions, we propose to exchange the roles of edge and node, and then perform the message passing mechanism. In this way, the edges can interact with each other via message passing and the interactions among edges can be captured by the learned representations. However, after exchanging roles, each edge may connect with several edges via a single node, where the normal graph neural networks cannot be directly applied. Hence, we propose the hypergraph refining module to implement the expressive learning on edge representations, where the edges are converted to hyper-nodes and the nodes become hyper-edges. The ablation study shows that, compared with normal GNN models, adopting the proposed hypergraph refining module improves the GNN explanation performance from 94.42% to 99.81% on Mutag dataset, which verifies the effectiveness of the hypergraph refining module.
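The role exchange described above (edges become hyper-nodes, and each node becomes a hyper-edge grouping its incident edges) can be sketched as follows; this is an illustrative stdlib-Python reading of the graph-to-hypergraph transformation, not the authors' implementation.

```python
# Illustrative stdlib-Python reading of the role exchange described above:
# each original edge becomes a hyper-node, and each original node becomes a
# hyper-edge grouping all edges incident to it. Not the authors' code.

def edges_to_hypergraph(n_nodes, edges):
    # hyper-node i <-> original edge edges[i];
    # hyper-edge v <-> indices of edges incident to original node v.
    hyperedges = {v: [] for v in range(n_nodes)}
    for idx, (u, v) in enumerate(edges):
        hyperedges[u].append(idx)
        hyperedges[v].append(idx)
    return hyperedges

# Triangle 0-1-2 plus a pendant node 3 attached to node 2.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
H = edges_to_hypergraph(4, edges)
# Node 2 is incident to edges 1, 2, and 3, so its hyper-edge groups three
# hyper-nodes; message passing over H lets those edges interact directly,
# which a normal GNN on the original graph cannot express for nodes of
# degree greater than two.
```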
>Q2: Incorporate some graph SSL methods as baselines for fair comparison.
A2: We substitute the graph encoder in our model with a 2-layer GIN pretrained by GraphCL [3] and InfoGraph [4]. **The results are listed in Table 1 of the attached PDF.** As the results show, the graph-SSL-pretrained explainers are inferior to the proposed model after pretraining. We surmise this is because the graph SSL methods are general pre-training paradigms for GNNs, rather than being specifically designed for the GNN interpretation problem.
>Q3: More empirical studies to investigate the impacts of the pretraining dataset.
A3: 1). To further investigate the impact of the pre-training dataset size, we have added supplementary experiments. Specifically, compared with PT-Motifs (80000 graphs) in our paper, we generate PT-Motifs-M with 50000 graphs and PT-Motifs-S with 10000 graphs. The results are listed as follows.
|Interpretation|BA-2Motifs|Mutag|
|-|-|-|
|PT-Motifs (80000)|99.33|99.81|
|w/o pretrain|93.19|95.29|
|PT-Motifs-M (50000)|98.91|99.06|
|PT-Motifs-S (10000)|97.07|96.28|
|Prediction|Molhiv|Graph-SST2|
|-|-|-|
|PT-Motifs (80000)|80.86|88.05|
|w/o pretrain|79.71|83.48|
|PT-Motifs-M (50000)|80.77|87.59|
|PT-Motifs-S (10000)|79.82|85.23|
The results show that even PT-Motifs-S is able to outperform the model without pre-training. Moreover, a larger pre-training dataset indeed improves the performance more significantly than a smaller one.
2). It is possible that if the synthetic pre-training dataset is imbalanced, the performance improvement on downstream tasks will degrade, in terms of both interpretation and prediction. To further investigate this issue, we have added supplementary experiments on imbalanced pre-training datasets. Following existing works [1, 2], to generate the imbalanced datasets, we sample the explanatory subgraph $G_e$ from a uniform distribution, while the base $G_b$ is determined by $P(G_b) = b\times I(G_b=G_e) + (1-b)/4\times I(G_b\neq G_e)$. Therefore, we can manipulate $b$ to control the imbalance degree, which is defined as $i=4b/(1-b)$. The results are listed as follows.
|Interpretation|BA-2Motifs|Mutag|
|-|-|-|
|Balanced|99.33|99.81|
|w/o pretrain|93.19|95.29|
|b=0.3, i≈1.7|98.91|98.46|
|b=0.5, i=4.0|96.15|97.10|
|b=0.7, i≈9.3|93.63|94.30|
|Prediction|Molhiv|Graph-SST2|
|-|-|-|
|Balanced|80.86|88.05|
|w/o pretrain|79.71|83.48|
|b=0.3, i≈1.7|79.42|85.34|
|b=0.5, i=4.0|77.18|83.14|
|b=0.7, i≈9.3|75.04|81.90|
The results demonstrate that when pretraining on an imbalanced dataset, the performance improvement is less significant than that on the balanced one, but still better than that without pretraining.
3). In fact, the distribution of our pretraining dataset is not always aligned with the downstream tasks. Generally, the pretraining process is not strongly coupled to the downstream tasks, as in CV [5] and NLP [6]. Hence, there is no need to deliberately align the pretraining dataset with the downstream datasets. We rely on the fine-tuning process to align the pre-trained model with the specific downstream tasks, so the quantity and quality of the downstream dataset are also important to the fine-tuned performance. Additionally, when the distribution of our pretraining dataset is severely misaligned with the downstream datasets and causes a negative transfer issue, we can adjust the synthesis algorithm to align with the downstream dataset, which is a main advantage of the synthetic pretraining paradigm.
4). **We list the time cost to generate a synthetic graph dataset with different graph scales in Table 2 of the attached PDF.** The result shows that the time cost of graph generation is nearly linear in the average number of nodes.
>Q4: The training details of the baselines.
A4: Here, we sketch two important baselines. For GNNExplainer, we tune the learning rate over (1, 0.1, 0.01, 0.001) and the coefficient of the L1-norm over (0.1, 0.01, 0.001). The coefficient of the entropy regularization is set to the recommended value 1. For PGExplainer, we use the tuned recommended settings from [2], including the temperature and the coefficient of the L1-norm regularization. The training details of the baselines will be added in the final version.
[1] Discovering Invariant Rationales for Graph Neural Networks. ICLR 2022.
[2] Parameterized Explainer for Graph Neural Network. NeurIPS 2020.
[3] Graph Contrastive Learning with Augmentations. NeurIPS 2020.
[4] InfoGraph: Unsupervised and Semi-supervised Graph-Level Representation Learning via Mutual Information Maximization. ICLR 2020.
[5] Momentum contrast for unsupervised visual representation learning. CVPR 2020.
[6] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. NAACL-HLT 2019.
---
Rebuttal 2:
Comment: Dear reviewer M6tM:
Thanks again for your insightful comments, which, we believe, are very important to improve our paper.
In the rebuttal and submitted one-page pdf, we have tried to answer your questions one by one.
If you have further questions, we are very happy to discuss them.
---
Rebuttal Comment 2.1:
Title: Thanks for your response
Comment: Thanks for the detailed response provided by the authors, and I want to express my appreciation for the additional illustrations and experiments. I have decided to raise my rating to 5 (borderline accept) and will make my final decision during the reviewer discussion phase. Thanks!
---
Reply to Comment 2.1.1:
Comment: Thanks again for your valuable comments and insightful suggestions that have allowed us to improve the manuscript. | Summary: The authors propose a pre-trained interpretable GNN named \pi-GNN that can distill universal graph structural patterns. \pi-GNN is pre-trained on a newly constructed synthetic graph dataset with ground-truth explanations and is then able to generalize across different graph datasets and tasks. Technically, a structural pattern learning module is introduced to capture and fuse multiple structural patterns for generalizable graph representations. Next, a refining module based on hypergraphs is proposed to incorporate the generalizable patterns with the local structural interactions. Extensive experiments demonstrate the superiority of \pi-GNN over the SOTA baselines in terms of both interpretation and prediction performance. Additionally, an inter-task experiment which evaluates the graph-classification pre-trained model on a node classification task strongly verifies the excellent generalizability of \pi-GNN.
Strengths: S1: The paper first studies the pre-training problem of interpretable GNN for generalizable graph interpretability, which is interesting and insightful to the community.
S2: The proposed method, i.e., \pi-GNN, is well-motivated and plausible, since graph structure follows some universal structural patterns that are important to the graph interpretation problem. Specifically, the combination of multiple basic pattern learners and one integrated pattern learner is reasonable to provide a generalizable structural representation. The graph-to-hypergraph transformation is elegant to fuse the universal patterns and the local interactions via hypergraph message passing. In short, the architecture of \pi-GNN is easy to understand and the figures in the manuscript are clearly illustrated.
S3: The improvements shown in the experiments are significant on almost all datasets with regard to both interpretation and prediction performance. In section 4.3, after pre-training on the graph classification task, the top-tier interpretation performance on node classification is astonishing and demonstrates the cross-task generalizability of \pi-GNN.
S4: Sufficient supplementary materials are provided, including an ablation study and hyper-parameter analysis, to probe the effectiveness of the proposed modules and the suitable hyper-parameters of \pi-GNN. Moreover, a great number of visualized explanation cases are reported for an intuitive understanding of the interpretation and prediction of \pi-GNN.
Weaknesses: W1: It seems like some related works about interpretable GNNs are missing in section 5, such as CAL [1] and OrphicX [2].
W2: The subgraph selection process is a little obscure to me.
The authors may want to append further illustrations on how to select the explanatory edges and embrace the contribution score into gradient optimization.
[1] Yongduo Sui, Xiang Wang, Jiancan Wu, Min Lin, Xiangnan He, Tat-Seng Chua. Causal Attention for Interpretable and Generalizable Graph Classification. KDD 2022: 1696-1705
[2] Wanyu Lin, Hao Lan, Hao Wang, Baochun Li. OrphicX: A Causality-Inspired Latent Variable Model for Interpreting Graph Neural Networks. CVPR 2022: 13719-13728
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Q1: The main question to me is the effectiveness of the pre-training strategy on interpretable GNNs. In what cases will this strategy improve the final interpretation and prediction performance? And how to avoid the negative transfer issue of the pre-training paradigm also needs more research efforts. Moreover, to consolidate the assumption that the universal patterns behind different tasks is common, the inter-task evaluation need to be conducted on some real-world datasets.
Q2: Will the SOTA baselines get improved by pre-training on the newly constructed datasets? There may need some modifications on the existing models to fit the pre-training paradigm, but I think it is profoundly influential to the community if the interpretable GNNs can be generally improved.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have listed some limitations in the supplementary material. I still expect more exploration and discussion on the effectiveness of the pre-training strategy as I mentioned above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >Q1: It seems like some related works about interpretable GNNs are missing in section 5, such as CAL [1] and OrphicX [2].
A1: Thanks for your suggestion. We will add the suggested references in the final version.
>Q2: The subgraph selection process is a little obscure to me. The authors may want to append further illustrations on how to select the explanatory edges and embrace the contribution score into gradient optimization.
A2: Thanks for your suggestions. Following existing works [3, 4, 5], the explainer provides a mask matrix $M\in\mathbb{R}^{|V|\times|V|}$, where the element $M_{ij}$ indicates the importance of edge $(i,j)$. Afterwards, such a mask matrix results in an attentive matrix $A^{att} = A\odot f(M)$, where $A$ is the adjacency matrix of the raw graph and the element $A^{att}_{ij}$ indicates the probability of edge $(i,j)$ belonging to the explanatory subgraph. Finally, based on the attentive matrix $A^{att}$, the explanatory subgraph is sampled according to the edge probabilities. We will add more explanation about the subgraph selection process in the Appendix.
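A minimal stdlib-Python sketch of this selection step, assuming $f$ is a sigmoid (an assumption for illustration; the cited works use various differentiable masks): build $A^{att}=A\odot f(M)$ and sample each edge as a Bernoulli trial with probability $A^{att}_{ij}$.

```python
import math
import random

# Minimal stdlib sketch of the selection step above, assuming f is a sigmoid
# (an illustrative assumption): build A_att = A ⊙ f(M) and sample each edge
# as a Bernoulli trial with probability A_att[i][j].

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def select_subgraph(A, M, rng=random):
    n = len(A)
    sub = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            p = A[i][j] * sigmoid(M[i][j])   # edge (i, j) keep-probability
            sub[i][j] = 1 if rng.random() < p else 0
    return sub

A = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]            # raw adjacency: triangle on nodes 0, 1, 2
M = [[0, 50, -50],
     [50, 0, -50],
     [-50, -50, 0]]        # mask: edge (0, 1) important, edges to node 2 not
sub = select_subgraph(A, M, random.Random(0))
# With this extreme mask, sub keeps (0, 1)/(1, 0) and drops the edges
# incident to node 2.
```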
>Q3: The main question to me is the effectiveness of the pre-training strategy on interpretable GNNs. In what cases will this strategy improve the final interpretation and prediction performance? And how to avoid the negative transfer issue of the pre-training paradigm also needs more research efforts. Moreover, to consolidate the assumption that the universal patterns behind different tasks is common, the inter-task evaluation need to be conducted on some real-world datasets.
A3: Thanks for your comments and suggestions.
1). The pre-training strategy for interpretable GNNs is most effective when the downstream datasets share some common structural patterns with the pre-training dataset. Therefore, a pre-training dataset that covers more structural patterns is more helpful to the downstream fine-tuning tasks.
2). As to the negative transfer issue, we notice that an imbalanced pre-training dataset tends to yield less significant improvements than a balanced one, in terms of both interpretation and prediction. We further investigate the impact of an imbalanced pre-training dataset. Following existing works [3, 4], each synthetic graph consists of one base subgraph $G_b$ and one explanatory subgraph $G_e$. To generate the imbalanced datasets, we sample the explanatory subgraph $G_e$ from a uniform distribution, while the base $G_b$ is determined by $P(G_b) = b\times I(G_b=G_e) + (1-b)/4\times I(G_b\neq G_e)$. Therefore, we can manipulate the hyperparameter $b$ to control the imbalance degree, which is defined as $i=4b/(1-b)$. The experimental results on imbalanced pre-training datasets are listed in the following tables.
|Interpretation|BA-2Motifs|Mutag|
|:-|:-|:-|
|Balanced|99.33|99.81|
|w/o pre-train|93.19|95.29|
|b=0.3, i≈1.7|98.91|98.46|
|b=0.5, i=4.0|96.15|97.10|
|b=0.7, i≈9.3|93.63|94.30|
|Prediction|Molhiv|Graph-SST2|
|:-|:-|:-|
|Balanced|80.86|88.05|
|w/o pre-train|79.71|83.48|
|b=0.3, i≈1.7|79.42|85.34|
|b=0.5, i=4.0|77.18|83.14|
|b=0.7, i≈9.3|75.04|81.90|
The results reveal that an overly imbalanced pre-training dataset (b=0.7, i≈9.3) may cause a negative transfer issue (on Mutag, Molhiv, and Graph-SST2). Therefore, the pre-training dataset ought to be balanced, which mitigates the negative transfer issue to some extent.
3). In this work, we conduct some preliminary experiments to investigate the inter-task generalization problem, and more evaluations on real-world datasets are necessary to consolidate our assumption about universal patterns. In future work, we will systematically extend to the interpretation problem in node classification and link prediction tasks.
>Q4: Will the SOTA baselines get improved by pre-training on the newly constructed datasets? There may need some modifications on the existing models to fit the pre-training paradigm, but I think it is profoundly influential to the community if the interpretable GNNs can be generally improved.
A4: Thanks for your comment. It is promising that the SOTA interpretable GNNs will get improved by incorporating the pre-training process. But on the other hand, incorporating the pre-training and fine-tuning paradigm with the interpretable GNNs is indeed challenging. For a future direction, we will investigate how to introduce the pre-training process to the SOTA baselines in a general way instead of specified modification.
[1] Causal Attention for Interpretable and Generalizable Graph Classification. KDD 2022.
[2] OrphicX: A Causality-Inspired Latent Variable Model for Interpreting Graph Neural Networks. CVPR 2022.
[3] Parameterized Explainer for Graph Neural Network. NeurIPS 2020.
[4] Towards Multi-Grained Explainability for Graph Neural Networks. NeurIPS 2021.
[5] Interpretable and Generalizable Graph Learning via Stochastic Attention Mechanism. ICML 2022.
---
Rebuttal 2:
Comment: Dear reviewer EENr:
Thanks again for your insightful comments, which, we believe, are very important to improve our paper.
In the rebuttal and submitted one-page pdf, we have tried to answer your questions one by one.
If you have further questions, we are very happy to discuss them.
---
Rebuttal Comment 2.1:
Title: Thanks for the rebuttal
Comment: Thanks for the rebuttal and detailed responses of the authors. I have carefully read the responses to my previous concerns. Generally, I am satisfied with their responses as most of my previous concerns, such as further explanation on the subgraph selection process, more discussions on the effectiveness evaluation of the model, and the generalization ability of the model are well addressed.
I think this paper makes clear contributions as it is the first attempt to build a more general pre-trained GNN explainer. I believe this will motivate a lot of follow-up works.
---
Reply to Comment 2.1.1:
Comment: Thanks again for your positive opinion of our paper and your valuable comments. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their insightful and constructive feedback. We have made point-by-point responses to the comments of each reviewer.
Moreover, we report two supplemental experiments in the attached file.
Finally, we once again thank all reviewers for their insightful comments which are very helpful for improving the quality of our paper.
Pdf: /pdf/2a4ebe52aac5888fe8a39bd37cd99a8cf3bb8a33.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Getting ViT in Shape: Scaling Laws for Compute-Optimal Model Design | Accept (poster) | Summary: Scaling laws in LLMs have typically been used to derive compute-optimal model sizes. In fact, one of the initial scaling-laws papers in language modeling indicated that, as long as the model size is kept constant, the model shape (embedding dim, MLP ratio, number of heads, depth) is not as important for performance. This paper, on the contrary, discovers that in the case of a Vision Transformer, given an equivalent amount of compute, it is indeed feasible to design parameter- and inference-cost-optimal models. These models are competitive with the larger models or outperform them on different tasks. Based on these observations, the paper presents qualitative insights for scaling individual shape dimensions of a vision transformer across domains. Further, the models derived based on these insights are evaluated on several computer vision tasks.
Strengths: The paper empirically investigates the effect of optimizing shapes of vision transformers to yield competitive models at a fraction of parameter size. Given the parameter and inference cost efficiency this finding is quite impactful towards designing hardware aware vision transformers.
Furthermore, though the empirical investigation is expensive it is at a cost of significantly fewer experiments in comparison to previous scaling laws studies. The paper is very well written and clear in most parts.
Weaknesses: The insights derived in the paper are very important and impactful. However, since this is more of an empirical investigation to derive scaling behaviours, I find the paper lacks originality. Also, the scaling methodology and behaviour would very likely change for different vision applications which also use transformers, e.g., super-resolution and self-supervised learning. Scaling behaviours might also change for different transformer types (Swin, DeiT, etc.). Deriving scaling laws for every application would mean repeating the analysis, which would incur significant computational cost. Furthermore, since the JFT dataset and the code of the paper are not open-sourced, the analysis and observations derived are not reproducible. For zero-shot transfer, linear probing, and ImageNet finetuning, ViT-G/14 still seems to dominate in most cases, hence I am not very convinced that the representations learned by SoViT-400m are indeed comparably or more effective than the ones learnt by ViT-G/14.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. For given parameter size, does optimizing shape i.e trading-off dimensions while parameter size is held constant help? This was not found to not be helpful in [Kaplan et-al](https://arxiv.org/abs/2001.08361) and it would be interesting to know the observations in this study.
2. How closely are the laws derived tied to the size of the pre-training dataset, would the observations be similar if the laws were studied on imagenet for example?
3. Could the authors report the total TPU-hours/cost of analysis?
4. Minor: Should scaling “vision transformations” be scaling “vision transformers”? Line 70
5. How does this analysis compare to NAS methods (eg: [AutoFormer ](https://openaccess.thecvf.com/content/ICCV2021/papers/Chen_AutoFormer_Searching_Transformers_for_Visual_Recognition_ICCV_2021_paper.pdf)) which automatically derive optimal shapes? What are the advantages and disadvantages between these?
6. How hyper-parameters are set for different shapes is unclear to me. Could you please clarify this?
7. What do the question marks in Table 1 indicate?
8. ViT-g/14 seems to work better for multitask decoding in Table 3. Do the authors have an intuition of why that could be?
If all my concerns and questions are addressed appropriately I am willing to increase my score for the paper.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The limitations of the work are not adequately discussed:
1. The compute cost of the analysis?
2. Reproducibility of the experiments?
3. Are there any implicit assumptions which may affect the laws derived empirically in the paper?
I encourage the authors to discuss these limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed and careful review. Please see our response below:
- We disagree that our work lacks originality. We provide an approach based on scaling laws for inferring compute-optimal model shapes, which has not been done in the literature before. In addition, we introduce the functional form in Eq 3, which helps in reducing the number of experiments significantly. We also demonstrate that a combination of a star and a grid sweep is sufficient for inferring compute-optimal shapes, thereby reducing the cost of the analysis further.
- We agree that having different architectures on different domains may require repeating a similar analysis. However, the fact that different domains may result in different optimal shapes is not a limitation of our work per se, since our goal is to discover those optimal shapes, not alter them. We would like to emphasize, however, that the scaling exponents are similar across the two domains we studied. Hence, in terms of order-of-magnitude, the optimal shape is similar in both domains when the model is sufficiently large. We provide evidence to support this in our experiments by evaluating SoViT-400M in multimodal tasks, such as zero-shot classification, captioning, and VQA.
- Regarding open-sourcing the code, we use a publicly available codebase that was removed to preserve anonymity and will be included in the final version of the paper. In the Appendix, we do provide the full training configuration for our experiments and provide a full description of the star and grid sweeps.
- Optimizing the shape while keeping the parameter count fixed indeed helps. We demonstrate this in Figure 6 (leftmost figure). Here, both SoViT-150m and the baseline (denoted B-150m) have the same size, yet SoViT-150m performs better.
- In our analysis, we assume that no overfitting occurs. This is crucial for Eq 2 to be valid, otherwise increasing the compute t can increase the loss at some point. To ensure that there is no overfitting, the size of the pretraining dataset should be quite large. For this reason, we do our analysis on JFT-3B.
- The relation between TPU hours and GFLOPs is reported in Figure 4.
- Thank you for spotting the typo in Line 70. We’ll fix it.
- The same hyperparameters are used for all architectures. These are provided in the Appendix (e.g. see Appendix B.1 for supervised pretraining and Appendix C for multitask decoding).
- We will fix the question marks in Table 1 and replace them with dashes. Thanks for pointing this out.
- The difference between SoViT-400m and ViT-g/14 in Table 4 for multitask decoding is negligible in our opinion; compare it for instance with the difference between SoViT-400m with ViT-L/16 that is similar to it in size.
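As a side note on the star and grid sweeps mentioned in the first bullet of this rebuttal, the combinatorial saving of a star sweep over a full grid sweep can be illustrated with a small counting sketch (the dimension names and candidate counts below are illustrative assumptions, not the paper's actual sweep configuration):

```python
# Experiment-count comparison: full grid sweep vs. star sweep.
# A grid sweep trains every combination of candidate values; a star
# sweep varies one shape dimension at a time around a center point.
# The candidate counts below are illustrative, not from the paper.

def grid_sweep_count(candidates_per_dim: dict) -> int:
    """Number of runs if every combination is trained."""
    total = 1
    for n in candidates_per_dim.values():
        total *= n
    return total

def star_sweep_count(candidates_per_dim: dict) -> int:
    """One center run, plus (n - 1) off-center runs per dimension."""
    return 1 + sum(n - 1 for n in candidates_per_dim.values())

# Three ViT shape dimensions, 6 candidate values each (assumed).
sweeps = {"width": 6, "depth": 6, "mlp_dim": 6}

print(grid_sweep_count(sweeps))  # 216 runs
print(star_sweep_count(sweeps))  # 16 runs
```

Even for this small assumed configuration, the star sweep needs an order of magnitude fewer pretraining runs than the grid, which is the kind of cost reduction the rebuttal refers to.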
---
Rebuttal Comment 1.1:
Title: Clarification
Comment: Dear reviewer,
We thank you again for the insightful feedback and acknowledge the areas where clarification is required.
We would like to clarify that in places where we have not detailed how certain feedback will be incorporated, it's mainly because we are still deliberating on the best way to incorporate those suggestions. We are definitely taking all comments into account when revising the paper.
This includes, for example, improving the clarity of Figure 1, Figure 3, and Section 3. In addition, we plan to include further details about the CIDEr and log-perplexity metrics, what “equivalent-compute” means, a brief description of the shape dimensions, the meaning of the exponents b and c, a link to the code, and adding further discussions in Section 5.5 to highlight the role of the sequence length. We will address all of these points.
Thank you again for the constructive feedback, and for your suggestions to enhance the quality and readability of the paper. | Summary: The authors study the recent empirical insight that test performance follows a predictable power-law structure in terms of (optimally-allocated) compute and extend this notion to take into account the “shape” parameters of underlying model such as width, depth etc. They demonstrate that power-law behaviour can indeed be leveraged to design a strategy to explore the shape space, enabling a significant decrease in the amount of computation needed, compared to a naive grid search. The authors focus on the class of vision transformers and show that their discovered shape-optimal models match or outperform larger models (trained with same compute) while at the same time offering more efficient inference.
Strengths: 1. The idea to leverage the (predictable) power-law behaviour of test performance to determine the ideal shape of a network is very well founded, and it's very surprising that nothing similar has been done before. The results show that it is indeed worthwhile to optimize for the shape of vision transformers, as one can apparently achieve similar performance with a significantly smaller model.
2. An important quality of this work which in my opinion goes beyond most previous work on scaling behaviour is that the predictability is actively leveraged to reduce the search space of compute optimal models. By devising the so-called star-sweep strategy, the amount of compute needed is strongly reduced (albeit still very large).
3. The experimental setup is very extensive and the discovered architecture is evaluated on a broad range of tasks, ensuring that its optimality is not just an artifact of optimizing for ImageNet downstream accuracy.
Weaknesses: The definitions of some terms in this paper are sometimes unclear or never really stated. The most prominent and important one is compute **t**. Its definition seems to switch from context to context, which makes it difficult to follow. As far as I understood, compute **t** refers to the total compute, i.e. how many samples/tokens, training epochs and FLOPs per forward pass are needed. In other parts of the text, compute seems to refer to the number of samples solely; e.g., in line 127, compute-unbounded seems to mean infinite sample size. Similarly, in line 133, how can compute be fixed if the model size can be scaled arbitrarily (and compute is a function of model size)? In general I find it confusing that equation (2) depends both on compute **t** and shape parameter x_k, while **t** itself is actually also dependent on $x_k$, i.e. $t(x_k)$. Wouldn't it be cleaner to simply replace compute **t** by the number of samples? I would appreciate it if the authors could elaborate on this. I don't think this has any major implications for the results, but a clarification would certainly enhance readability.
Similarly, a quick definition of width, MLP dimension and depth could be helpful for readers less familiar with the details of vision transformers.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. How does each shape parameter (i.e. width, MLP dim and depth) contribute to the total number of FLOPs for say, a single forward pass? Is it for instance “cheaper” to make a ViT wider compared to adding another layer? What about wall-clock time and memory? It would be nice to at least have an approximate understanding of how the FLOP count is affected by these shape dimensions.
2. Something that is maybe clear and I missed it, but were the ViTs that you compare against, i.e. ViT-G/14 and ViT-g/14, considered “compute-optimal” before this work, and thus a “valid baseline”?
3. How does flexifying the architecture rule out that other patch sizes could be shape-optimal? The corresponding section is very short unfortunately (for space reasons I assume) but it would be great if the authors could expand on this.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed and careful review. Please see our response below:
- Compute is defined in terms of FLOPs throughout the paper. However, when the architecture is fixed, compute becomes proportional to the number of seen examples. That’s why in Line 127, we refer to infinite data as compute-unbounded.
There is indeed a dependence between the architecture and its compute as you mention. However, we can always treat them separately by first fixing the architecture and, then, training for the chosen amount of compute (in FLOPs). This is why we can have both $t$ and $x$ in Equation 2. The reason we do not use the sample size is because large models are more sample-efficient (Zhai, et al. 2022) so if the goal is to minimize compute by minimizing the size of the training data, the solution is to (trivially) scale up the size of the model indefinitely.
- We will add a brief description of the shape dimensions in the revised version of the paper.
- The impact of the model shape on compute is captured by the exponents $b$, which are shown in Figure 5. These are automatically taken into account when optimizing the shape for compute in Eq 4. Note that in Eq 4, increasing the value of $b$ would decrease the scaling exponent $s$.
- The baselines ViT-g/14 and ViT-G/14 are not compute-optimal. This is what we demonstrate in this work. We use them because they are widely used in the literature, and hence serve as valid baselines.
- Flexifying does not rule out that other patch sizes could be compute-optimal. Our intent is merely to demonstrate that SoViT-400M continues to perform quite well for other patch sizes when it is flexified.
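Returning to the first point of this rebuttal (compute measured in FLOPs): here is a minimal sketch of why, once the architecture is fixed, total compute becomes proportional to the number of seen examples. The per-example FLOP count below is a placeholder, not a measurement from the paper.

```python
# Compute t is measured in total FLOPs.  For a *fixed* architecture,
# FLOPs-per-example is a constant, so t is proportional to the number
# of training examples seen.  The per-example cost is a placeholder.

def total_compute(flops_per_example: float, num_examples: int) -> float:
    """Total training compute t in FLOPs for a fixed architecture."""
    return flops_per_example * num_examples

FLOPS_PER_EXAMPLE = 8.7e9  # placeholder cost for some fixed shape

t1 = total_compute(FLOPS_PER_EXAMPLE, 600_000_000)
t2 = total_compute(FLOPS_PER_EXAMPLE, 1_200_000_000)
print(t2 / t1)  # 2.0 -- doubling the data doubles compute
```

When the shape changes, the constant of proportionality changes with it, which is why compute and sample count cannot be used interchangeably across architectures.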
---
Rebuttal Comment 1.1:
Title: Clarification
Comment: Dear reviewer,
We thank you again for the insightful feedback and acknowledge the areas where clarification is required.
We would like to clarify that in places where we have not detailed how certain feedback will be incorporated, it's mainly because we are still deliberating on the best way to incorporate those suggestions. We are definitely taking all comments into account when revising the paper.
This includes, for example, improving the clarity of Figure 1, Figure 3, and Section 3. In addition, we plan to include further details about the CIDEr and log-perplexity metrics, what “equivalent-compute” means, a brief description of the shape dimensions, the meaning of the exponents b and c, a link to the code, and adding further discussions in Section 5.5 to highlight the role of the sequence length. We will address all of these points.
Thank you again for the constructive feedback, and for your suggestions to enhance the quality and readability of the paper.
---
Rebuttal Comment 1.2:
Comment: I thank the authors for their explanations. I only have a remaining question regarding the definition of compute:
**Compute:** Maybe there was a misunderstanding, I'm not suggesting to equate compute $t$ with sample size $N$, but rather have a formula in the Chinchilla style, i.e. $f_k(x_k, N) \propto A_kx_k^{-\alpha_k} + B_kN^{-\beta_k}$ while compute $t \propto N g(x_1, \dots, x_K)$ is fixed where $g(x_1, \dots, x_K)$ determines the number of FLOPs for shape configuration $(x_1, \dots, x_K)$. Of course one could apply your refined formulation to the above law. Or does such an approach still not work? I might still be missing something.
---
Reply to Comment 1.2.1:
Title: Response
Comment: Thank you for the clarification and the suggestion.
The Chinchilla-style formula is a special case of the one we use in (2). In particular, if one has $f_k(x_k, N)\propto A_kx_k^{-\alpha_k} + B_kN^{-\beta_k}+\varepsilon_k$, then Equation 2 would also hold; e.g. by setting $\gamma=0$ and writing $t \propto x_k^{b/c} N$ (since compute $t$ is directly proportional to the data size $N$ and $t=0$ whenever $x_k=0$). In our experiments, we found this to be a good approximation indeed. In particular, setting $\gamma=0$ results in values of $b$ and $c$ for which the relation $t \propto x_k^{b/c} N$ holds approximately. The scaling exponent $s$ remains relatively unchanged in either case (e.g. for depth, it becomes 0.43 instead of 0.45). The reason it does not change much is that $\gamma\ll 1$ in the first place, as would be expected from Equation 3 and the way we construct the star sweep.
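For readers following this exchange, the special-case reduction described in the reply can be laid out as a short derivation (using the same symbols as the discussion above; proportionality constants are suppressed):

```latex
% Chinchilla-style law (the reviewer's suggested form):
f_k(x_k, N) \;\propto\; A_k x_k^{-\alpha_k} + B_k N^{-\beta_k} + \varepsilon_k .
% Setting \gamma = 0 in Equation 2, and using the fact that compute t is
% directly proportional to data size N for a fixed shape (t = 0 when x_k = 0):
t \;\propto\; x_k^{b/c}\, N ,
% so the Chinchilla-style form is recovered as a special case of Equation 2.
```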
Thank you for bringing this up. We will add a discussion about it to the paper. | Summary: This paper proposes a novel and empirical take on the design of large vision transformers (ViTs), in the continuity of a previous paper aiming at optimizing the training of transformers.
Whereas the previous paper was aiming to optimize a single parameter (optimal model size) given a fixed training budget, this paper goes one step further and attempts to discover the optimal ViT architecture (that is 3 parameters: token dimension, depth, MLP size) given a fixed training budget.
The paper empirically solves this question via numerous experiments, while answering related questions and distilling interesting insights on the way.
Strengths: ### Writing
- The introduction is well written and clear
- the rest of the paper is quite complex and unclear at times
### Method
- The optimization method is novel and an improvement w.r.t. the "optimal model size" paper, as it can deal with multiple hyper-parameters while requiring far fewer experiments.
- The insights about the choice of the joint functional form (eq. (2)) are interesting and well-grounded
- Definitely a valuable paper for the community, even though some aspects could be improved
### Experiments
- multiple experiments on several benchmarks support the initial claim that a smaller-but-optimal architecture achieves as well as vanilla SotA architectures
- Experimental findings are useful and valuable (Section 4.1)
- it is nice to see that other tasks than image classification are being experimented with, and that the findings hold for a wide variety of downstream tasks
Weaknesses: ### Exposition
- line 57: "Figure 1: The MLP dimension is scaled faster than depth, which in turn is scaled faster than width." --> I do not find this to be clear, looking at Figure 1
### Method
- The scaling parameter defined in eq. (4) seems central to the analysis. Unfortunately, I'm not sure I understand exactly what it is and where it comes from. In particular, I don't understand why it should be invariant to the choice of the shape dimension (see Figure 5)
- About the difference between small and large models
- it is claimed that "in small models, an optimal shape in one domain is not necessarily optimal in others." (Figure 3)
- How are "small" and "large" models exactly defined? This seems like a very convenient subjective definition.
- Why does the model size matter so much when it comes to the optimal architecture across application domains?
### Experiments
- Figure 6:
- "while keeping compute fixed." --> how exactly? Does this mean other model hyper-parameters are decreased to keep the same model complexity? Or does this mean the training is shorter/longer?
- the supposedly optimal model is clearly not optimal, since increasing the depth or MLP size results in (slightly) better performance. Since these experiments are on a relatively "small" model, according to the paper, does this mean the claims do not hold anymore? (see above)
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: I'm ready to upgrade my rating if the authors clarify some of the points mentioned above.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed and careful review. Please see our response below:
- In Figure 1, we use different axes for different dimensions. This may make it difficult to see how the MLP dimension is scaled faster than the others. Note, for example, that going from 1T to 100T GFLOPs corresponds to an increase in MLP by around a factor of $\times 3$. For depth, on the other hand, the increase is around a factor of $\times 2$. Generally, the scaling exponent for the MLP dimension is $\approx 0.6$, which is larger than depth or width.
- Equation 4 is obtained by setting the derivative in (3) to zero and solving for $x$. The scaling exponents need not be invariant to the choice of the dimension as you mention. However, based on prior theoretical works that explain scaling laws via a space partitioning argument (Bahri, et al. 2021; Hutter, 2021; Sharma and Kaplan, 2022), we would expect the exponent $c$ to be roughly similar across all dimensions, which seems to be indeed the case in Figure 5. So, only $c$ is expected to be invariant.
- Regarding the model size, the terminology “small” and “large” is meant to convey in simpler terms the following results. In Figure 3, we observe that the compute-optimal model for classification highlighted in blue is not compute-optimal for image-to-text tasks as shown in the rightmost figure. For this reason, a compute-optimal shape in one domain may not be optimal in others. However, we also show that the scaling exponents are similar. Hence, in terms of order-of-magnitude, the optimal shape is similar in both domains when the model is sufficiently large. We do provide evidence to support this in our experiments by evaluating SoViT-400M in multimodal tasks, such as zero-shot classification, captioning, and VQA.
- We keep compute fixed by changing the training duration, making it longer for smaller models and shorter for larger models, such that the total FLOPs is the same.
- Regarding the optimality of the shape and Figure 6, we clarify this in Lines 115-116. Due to modeling assumptions, approximations, and the finite possible number of experiments, we can only approximate the optimal shape. In Figure 6, we see that deviating from the predicted optimal width, for example, does degrade performance. For other dimensions, the performance does not change significantly when they are increased but it does degrade when they are decreased. We do believe that this highlights our argument that one can identify a near-optimal shape using the recipe we propose.
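Returning to the first bullet of this rebuttal (growth rates in Figure 1), the quoted growth factors can be turned into implied scaling exponents with a simple calculation. Note this simplified $x \propto t^s$ mapping is only an illustration of what "scaled faster" means; the implied exponents below are not the paper's fitted exponents, which are defined through Eq 4.

```python
import math

# Implied per-dimension exponents from the growth factors quoted above:
# over a 100x increase in compute (1T -> 100T GFLOPs), the MLP dimension
# grows ~3x and depth ~2x.  If a dimension scaled as x* ∝ t^s, then
# s = log(growth) / log(compute_ratio).

def implied_exponent(growth_factor: float, compute_ratio: float) -> float:
    return math.log(growth_factor) / math.log(compute_ratio)

s_mlp = implied_exponent(3.0, 100.0)    # MLP dimension: ~3x growth
s_depth = implied_exponent(2.0, 100.0)  # depth: ~2x growth

print(round(s_mlp, 3))    # 0.239
print(round(s_depth, 3))  # 0.151
assert s_mlp > s_depth    # MLP dimension is scaled faster than depth
```

The larger implied exponent for the MLP dimension is what the sentence under Figure 1 is asserting, even though it takes some tick-label arithmetic to see it in the plot.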
---
Rebuttal Comment 1.1:
Title: Acknowledgment
Comment: I have read the rebuttal, and I must say I am relatively disappointed by the lack of willingness from the authors to improve their paper based on my and the other reviewers' suggestions. Overall, I see no promise to improve the manuscript except for fixing typos and adding references.
I know what is a plot and an axis, and I know how to read them, thank you. My role as a reviewer is not just to accept or reject a paper, it is also to help improve the paper quality and readability, for the sake of readers. Arguably, Figure 1 is a good example of my point: it is cited in the paper line 56, followed by the explanation "The MLP dimension is scaled faster than depth, which in turn is scaled faster than width", but this does not show at all at first sight when someone looks at Figure 1! And not on 2nd sight either. It actually requires some calculations based on the axes' tick labels to realize that, indeed, MLP dimension is scaled faster than depth, etc. The figure is completely counter-intuitive in that regard. Is it too much to ask for improving it?
Same comments for the scaling exponents. Section 3 is really technical, and it wouldn't hurt to explain things a bit more (as pointed out by other reviewers too).
Same comments for other things the reviewer found difficult, unclear or counter-intuitive.
Also, I am not satisfied by the answer regarding the difference between large and small models. I know that asymptotically, shape will not matter when the model grows sufficiently large. What would be useful is to be able to characterize when this happens, because so far the paper does not answer this question at all. By the way, Figure 3 is, again, not super clear nor intuitive (and what do all the gray circles denote, exactly?)
---
Reply to Comment 1.1.1:
Title: Clarification
Comment: We appreciate your prompt and insightful feedback and acknowledge the areas where clarification is required that are highlighted by the reviewers.
In places where we have not detailed how certain feedback will be incorporated, it's mainly because we are still deliberating on the best way to incorporate those suggestions. We are definitely taking all comments into account when revising the paper.
In reference to Figure 1, in particular, we plan to make it easier to see the rate of growth of each dimension as you suggested. Possible approaches might include adding a second right-axis to display the percentage increase from a fixed reference point (such as 1T tokens), or adding labels within the plot to indicate specific milestones, like when a dimension is doubled or tripled.
The same holds for Section 3 and the other places suggested by the reviewers, such as Figure 3, details about the CIDEr and log-perplexity metrics, what “equivalent-compute” means, a brief description of the shape dimensions, clarifying the meaning of the exponents b and c, a link to the code, and adding further discussions in Section 5.5 to highlight the role of sequence length. We will address all of these points.
Regarding the model size, we will rephrase that statement to say that the optimal shapes can be initially different but they converge as the model size increases, without referring explicitly to the terms “small” or “large” since that can be subjective as you suggested.
Thank you again for the constructive feedback, and for your suggestions to enhance the quality and readability of the paper. | Summary: This paper introduces an efficient approach to investigate the scaling laws for compute-optimal model shapes, such as model width and depth. It proposes a shape-optimized vision transformer called SoViT.
A comprehensive evaluation across various tasks highlights the effectiveness of the proposed architecture. SoViT-400m/14 achieves 90.3% fine-tuning accuracy on ILSVRC-2012, surpassing ViT-g/14, which has a larger model size. This study makes a valuable contribution to the design of vision transformers and is expected to have a certain degree of impact in this era.
Strengths: 1. The paper is well-written, with well-defined formulas and sufficient supporting materials.
2. The three shape parameters are analyzed well. The paper proposes star-sweep and grid-sweep strategies to investigate the scaling laws while avoiding an expensive search cost.
3. Extensive experiments are implemented to validate the method, including image classification, multitask decoding, and segmentation tasks.
Weaknesses: 1. Figure 3 is challenging to understand, and it may benefit from more elaborate annotations and explanations to clarify the significance of certain data points within the figure.
2. For the experiments, SoViT-400m/14 can surpass ViT-g/14 for the image classification task. However, SoViT-400m/14 does not show a significant advantage over ViT-g/14 and ViT-L/16 in other tasks, such as OCR and VQA. The details of metrics such as Log-PPL and CIDEr in Table 4 should be explained in the paper.
3. A few typos in the paper, such as x_{k} in A.1 Quasiconvexity Proof.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Figure 6 indicates that deviating from the optimal depth/MLP configuration does not lead to performance degradation. Furthermore, the evidence supporting the superiority of the optimal shape is limited, and it seems that only three experiments (<33%, >33%, >200%) were conducted, as shown in the figure.
2. The compute-optimal model shape is different among tasks. How to search for an optimal model which can be applied to various downstream tasks?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: As above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed and careful review. Please see our response below:
- In Figure 3, each dot corresponds to a model architecture pretrained on 600M examples and evaluated on one downstream metric.
The metrics from left to right are: 5-shot, 10-shot, and 25-shot (all in ImageNet). In the rightmost figure, the metric is an average log-perplexity score across a mixture of four tasks including VQA and captioning, as we describe in Section 4.2.
In the first three figures, the downstream metrics are for classification and we observe that the compute-optimal model highlighted in blue is compute-optimal in all three cases because it lies in the efficient frontiers in all three cases. But, it is not compute-optimal for image-to-text tasks as shown in the rightmost figure. For this reason, a compute-optimal shape in one domain may not be optimal in others. We will clarify this in the paper.
- In Table 4, we show that SoViT-400M is comparable to ViT-g/14 in multimodal tasks. As you mention, it does not perform strictly better but performing equally well is itself a significant gain, because SoViT-400M is much smaller and less costly (GFLOPs, Images/Core/s) than ViT-g/14, as shown in Figure 2.
- We did not explain log-perplexity and CIDEr because they are standard metrics in the literature (e.g. [1] and Section 9.3.2 in [2]). We will include references for their definitions in the revised version of the paper.
- Thanks for spotting the typo in A.1. We’ll fix it.
- Regarding the optimality of the shape and Figure 6, we clarify this in Lines 115-116. Due to modeling assumptions, approximations, and the finite possible number of experiments, we can only approximate the optimal shape. In Figure 6, we see that deviating from the predicted optimal width, for example, does degrade performance. For other dimensions, the performance does not change significantly when they are increased but it does degrade when they are decreased. We do believe that this highlights our argument that one can identify a near-optimal shape using the recipe we propose.
- The reason we only report <33%, >33%, and >200% in Figure 6 is because we do observe a drop in all cases when using <33% so it is a sufficient demonstration. On the other hand, increasing the dimension by >33% or >200% both give similar results, so we expect similar results when using, for example, >100%.
- It is true that the compute-optimal shape is different across domains when the model is small. The scaling exponents, however, are the same in both domains: (1) image classification and (2) image-to-text, as discussed in Section 4.2. This means that in terms of order-of-magnitude, the optimal shape is similar in both domains when the model is sufficiently large. We do provide evidence to support this in our experiments by evaluating SoViT-400M in multimodal tasks, such as zero-shot classification, captioning, and VQA.
[1] Vedantam, R, et al. "Cider: Consensus-based image description evaluation." CVPR, 2015.
[2] Zhang, A., Lipton, Z. C., Li, M., & Smola, A. J. Dive into deep learning. 2021.
---
Rebuttal Comment 1.1:
Title: Clarification
Comment: Dear reviewer,
We thank you again for the insightful feedback and we acknowledge the areas where clarification is required.
We would like to clarify that in places where we have not detailed how certain feedback will be incorporated, it's mainly because we are still deliberating on the best way to incorporate those suggestions. We are definitely taking all comments into account when revising the paper.
This includes, for example, improving the clarity of Figure 1, Figure 3, and Section 3. In addition, we plan to include further details about the CIDEr and log-perplexity metrics, what “equivalent-compute” means, a brief description of the shape dimensions, the meaning of the exponents b and c, a link to the code, and adding further discussions in Section 5.5 to highlight the role of the sequence length. We will address all of these points.
Thank you again for the constructive feedback, and for your suggestions to enhance the quality and readability of the paper. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
LaFTer: Label-Free Tuning of Zero-shot Classifier using Language and Unlabeled Image Collections | Accept (poster) | Summary: The paper proposes a novel method for improving zero-shot classification of VL models, in an unsupervised manner. This is done without additional visual labeled examples, yet with additional unlabeled examples. They rely on LLM to generate a dataset of text describing the desired classes. Then, they train a text classifier on top of the VL textual encoder using the generated dataset. As a second stage, they use a set of relevant unsupervised images as well as a set of designed augmentations, to finetune the classifier and visual prompts, adjusting it to the VL models image representations. In an extensive evaluation, the authors show a significant performance enhancement, and ablate their method methodically.
Strengths: - The paper is clearly written and easy to follow.
- The performance enhancement shown by LaFTer is very significant and consistent across most test cases.
- Although inspired by previous works, the idea of tuning the visual encoder of VL models using text supervision is highly interesting.
Overall, I believe the idea of the paper is very interesting, and the results are a significant improvement over CLIP.
Weaknesses: My main concern is the performance of CLIPPR baseline. In most cases, it has been found to be less successful than the original CLIP, which is not intuitive with the method. Could the authors explain this gap please?
One other minor concern is the statement in the abstract about LaFTer being the first to reduce the gap from the supervised baseline. Table 1 shows LaFTer is not the first. Table 2 shows that it does not close the gap for about 50% of the cases, using only few-shot supervision. I believe this is a misleading statement.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - Could the authors please verify if the experiments denoted by LaFTer* are just the second stage of LaFTer using CLIP as the initial text classifier?
If so, I think these results are more suitable to appear in the ablations section (Table 4).
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: - The method requires access to a set of unlabeled images from the same distribution.
- The method is limited by the quality of the visual representation of the VL model.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the time and effort spent in reviewing our paper. In the following, we provide a response to the questions raised in the review.
**Performance of CLIP-PR baseline.** The CLIP-PR paper reported image classification results only on CIFAR-10 and ImageNet.
We used the official codebase provided by the CLIP-PR authors to evaluate their method on all the additional
datasets reported in our paper. We used the values for all hyperparameters as recommended by the CLIP-PR authors in
their official release.
We would also like to note that the zero-shot classification accuracy of the original (non-adapted) CLIP baseline
reported in CLIP-PR is lower than we observed in our experiments and reported in our paper.
For example, in Table $3$ of the CLIP-PR paper, they report $57.59$% CLIP zero-shot Top-1 Accuracy (%) for
ImageNet, whereas, we measured CLIP to obtain $61.9$% (Table $1$ of our submitted manuscript) while using the same ViT-B/32 backbone from
OpenAI as reported to be used by CLIP-PR and observed from their official code.
Similarly, for CIFAR-10 we obtain zero-shot accuracy of $88.8$% for base CLIP, while CLIP-PR reports $85.17$%.
**Confusion about LaFTer being the first to reduce the gap from the supervised baselines.** Thank you for
pointing it out, and we agree that the referred statement (in its current form) might lead to confusion.
However, the purpose was to highlight that LaFTer is the first method to jointly combine two concepts in order to
effectively finetune Vision-Language models in an unsupervised manner:
- Substituting labeled visual instances of the categories by generating text through Large Language Models (Learning an Image Classifier using Text, Section 3.1).
- Finetuning the visual encoder in an unsupervised manner by using unlabeled image data (Unsupervised Finetuning on Target Domain Images, Section 3.2).
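As a toy illustration of how these two stages compose (the random encoder stubs, dimensions, and hyperparameters below are our assumptions standing in for the frozen CLIP encoders; this is a schematic sketch, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, N_CLASSES, N_DESC = 16, 3, 20

# Hypothetical stand-ins for the frozen CLIP encoders; in LaFTer both
# map into the same shared embedding space.
def text_encoder(texts):
    return rng.normal(size=(len(texts), DIM))

def image_encoder(images):
    return rng.normal(size=(len(images), DIM))

# Stage 1: train a linear classifier purely on embeddings of
# LLM-generated class descriptions; each description's label is the
# class it was generated for, so no image labels are needed.
descs = [f"desc {i} of class {c}" for c in range(N_CLASSES) for i in range(N_DESC)]
y = np.repeat(np.arange(N_CLASSES), N_DESC)
T = text_encoder(descs)

W = np.zeros((DIM, N_CLASSES))
for _ in range(200):  # plain softmax-regression gradient descent
    logits = T @ W
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)
    W -= 0.5 * T.T @ (p - np.eye(N_CLASSES)[y]) / len(y)

# Stage 2: thanks to the shared space, the same classifier scores image
# embeddings; its argmax predictions become the pseudo-labels used for
# label-free finetuning of the visual side.
pseudo_labels = (image_encoder(range(5)) @ W).argmax(1)
```

The key point the sketch tries to convey is that the classifier never sees an image label; only the shared embedding space lets it transfer from text to images.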
It is true that CLIP-PR and UPL use unlabeled image data for unsupervised finetuning of Vision-Language models, however,
LaFTer is the first method to also take advantage of auto-generated text from the Large Language Models which
substitutes visual instances.
We will make the referred statement in the abstract concise and more clear in the updated version of the manuscript.
**Move LaFTer$^\*$ results to the ablation section.** Thank you for the suggestion.
Analyzing the tables again, we agree that it will be more suitable to move LaFTer$^*$ results to the ablations section
(Table $4$).
We will rearrange the results in the updated manuscript.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
CLIPPR Baseline Results:
There is a mismatch between the setting of CLIPPR and CLIP. As mentioned in the CLIPPR paper they used just a single prompt, where in CLIP a set of handcrafted prompts was used. I believe the setting of CLIPPR should be adjusted for a fair comparison. This concern remains unaddressed.
---
Reply to Comment 1.1.1:
Title: On adding multi-prompt CLIP-PR baseline
Comment: Thanks for pointing this out! To further address the reviewer's concern with the CLIP-PR single prompt evaluation in Table 1, we have also evaluated CLIP-PR in _multi-prompt_ setting (using the CLIP hand-crafted templates) as suggested by the reviewer. _Multi-prompt_ setting improves CLIP-PR result by 2.9% (averaging over all the evaluated benchmarks presented in Table 1 of our paper). But even in this setting, LaFTer has a 9.0% average advantage over the _multi-prompt_ CLIP-PR. That is, LaFTer average over all the evaluation benchmarks of Table 1 is 9.0% higher than the average of _multi-prompt_ CLIP-PR with positive gains for all the benchmarks (gains ranging between 1% and 22.6%). We provide the detailed results in the table below and will add the _multi-prompt_ CLIP-PR baseline and its results to the revised version of the paper (Table 1), thanks for this suggestion!
| | IN | C10 | C100 | EuroSat | DTD | Caltech |
|------------------|-------|--------|--------|---------|-------|---------|
| CLIP-PR (Single) | 60.4 | 89.3 | 63.2 | 44.2 | 40.1 | 84.8 |
| CLIP-PR (Multi) | 61.1 | 89.5 | 65.3 | 51.3 | 45.1 | 88.7 |
| LaFTer | 64.2 | 95.8 | 74.6 | 73.9 | 46.1 | 93.3 |
| | **UCF** | **Flower** | **SUN** | **IN-A** | **IN-R** | **IN-S** |
|------------------|-------|--------|--------|---------|-------|---------|
| CLIP-PR (Single) | 57.9 | 57.7 | 54.7 | 11.6 | 38.6 | 54.1 |
| CLIP-PR (Multi) | 59.7 | 60.1 | 57.0 | 15.2 | 40.8 | 56.9 |
| LaFTer | 68.2 | 71 | 64.5 | 31.5 | 42.7 | 72.6 | | Summary: This paper proposes a finetuning approach for Vision-Language models that does not require any labels. It begins by demonstrating the transfer of information between different modalities and training a classifier using natural language inputs. This classifier achieves the successful classification of visual data. Furthermore, it utilizes this pre-trained classifier that relies solely on text inputs in our pseudo-labeling pipeline. This enables it to effectively and efficiently fine-tune the Vision-Language models with fewer parameters.
Strengths: 1- The proposal introduces a label-free approach to zero-shot learning, capitalizing on the benefits of pretrained LLM models. Additionally, it enhances the model's performance by finetuning it in an unsupervised manner. This novel approach presents a new direction for zero-shot learning research, eliminating the need for labeled datasets even for seen classes.
2- The proposed method extends its experiments to include a few-shot framework and demonstrates a noteworthy improvement compared to the baseline approaches.
3- The experiments are conducted on twelve (12) diverse datasets, encompassing coarse-grained, fine-grained, and natural scene datasets. A comprehensive ablation study is conducted to demonstrate the effectiveness of the proposed model and the contribution of its individual components.
Weaknesses: 1- Upon analyzing Table-1, it is evident that the proposed model exhibits lower performance for the Flower and SUN datasets, both commonly utilized in zero-shot learning and are fine-grained datasets. This observation raises surprise as the proposed model struggles specifically with fine-grained datasets.
2- The visualization of embeddings obtained from both the text encoder and image encoder would provide a clearer understanding of the effectiveness of the pretrained CLIP (LLM) model. Additionally, generating images based on textual descriptions of classes would offer valuable insights into how the proposed model operates within the zero-shot framework.
3- The distinction between the weakly-augmented view and the strongly-augmented view is unclear when it comes to unsupervised fine-tuning. What are the benefits of utilizing the weakly-augmented view for pseudo-labels instead of the strongly-augmented view?
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Please answer all my concerns raised in the weaknesses section.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: In this paper, the authors acknowledge a limitation of their proposed model. They state that in their current work, they employed a straightforward neural network-based classifier and consider exploring more complex neural networks as part of future work. However, it is worth noting that from my observation of Table-1, the proposed model also faces challenges when applied to fine-grained datasets.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the time and effort spent in reviewing our paper. In the following, we provide a response to the questions raised in the review.
**Performance on fine-grained datasets.** On datasets like Flower-102 and SUN-397 our method improves the base
CLIP model by $4.4$% and $3.7$% respectively.
However, as compared to the other unsupervised finetuning baseline UPL, our LaFTer slightly lags behind
on Flowers-102 and on SUN-397 (while significantly improving over UPL on average across all tested datasets).
One potential reason could be the quality of the descriptions generated by the LLM for the fine-grained categories of these datasets.
To probe into this, we consider the example of Flowers-102 dataset.
It contains categories such as `love in the mist, mexican aster, alpine sea holly, ruby-lipped cattleya`,
which are names of flowers.
It could be challenging for an LLM to provide descriptions for these categories without having any prior knowledge
about the _parent category_ of these classes.
To generate more precise descriptions of these fine-grained classes, we slightly modify the prompts to the
LLM (GPT) to include the parent category (flower).
A couple of examples for the modified prompts are as follows (please refer to the supplementary Section 3.1,
for the original list of prompts):
- Describe what the _flower_ type {category} looks like.
- How can you identify the _flower_ type {category}?
By generating descriptions of the fine-grained categories by prompting the LLM with the parent category _flower_, we get
an improvement of $1.4$% ($71.0$% $\to$ $72.4$%) in the final classification accuracy of our LaFTer for this dataset.
The improved results show a gain of $0.9$% over the UPL baseline (similarly, we gain $0.3$% over UPL for SUN-397), and once again demonstrate the ease with which
the text-only pre-training proposed in LaFTer can facilitate further customization to downstream tasks and domains
via small (essentially cost-free) changes in the prompts.
By adapting the LLM instructions to include the _parent category_, we can easily generate better-targeted
descriptions for any fine-grained dataset.
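A small hypothetical helper sketching how the parent category can be threaded into the prompt templates (`build_prompts` is our illustration; the template wording paraphrases the examples above):

```python
def build_prompts(category, parent=None):
    """Assemble LLM prompts for a class, optionally with a parent category.

    With `parent` set, the prompts follow the fine-grained variant
    ("the {parent} type {category}"); without it, the plain variant.
    """
    if parent:
        noun, article = f"{parent} type {category}", "the"
    else:
        noun, article = category, "a"
    return [
        f"Describe what {article} {noun} looks like.",
        f"How can you identify {article} {noun}?",
    ]

prompts = build_prompts("mexican aster", parent="flower")
# → ["Describe what the flower type mexican aster looks like.",
#    "How can you identify the flower type mexican aster?"]
```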
Notice that we also experiment with _targeted prompts_ for the out-of-distribution dataset ImageNet-Rendition
and show a considerable performance improvement.
We discussed those results in our response to `Reviewer YPJ1`.
At this point, we leave the exploration of _targeted prompts_ towards downstream datasets as an interesting and
seemingly very promising future work direction.
**Visualization of embeddings.** In Figure 2 of the global response, we provide TSNE projections of the
visual and text embeddings, before and after adaptation for the $10$ classes in the EuroSAT dataset.
Analyzing the TSNE projections, we observe that the adaptation with our LaFTer
results in more pronounced category clusters with larger inter-class distances, and also that after applying LaFTer the class-label text
embeddings are better aligned with these category clusters.
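As a rough illustration of how such a joint projection can be produced (toy Gaussian features stand in for CLIP embeddings; this assumes scikit-learn is available and is not the script used for the paper's figure):

```python
import numpy as np
from sklearn.manifold import TSNE  # assumed available in the environment

rng = np.random.default_rng(0)
# Toy stand-ins: 3 classes x 10 image embeddings, plus one class-label
# text embedding per class (synthetic features, not the paper's data).
centers = rng.normal(size=(3, 32))
img_emb = np.vstack([c + 0.3 * rng.normal(size=(10, 32)) for c in centers])
txt_emb = centers + 0.1 * rng.normal(size=(3, 32))

# Project image and text embeddings jointly; tighter image clusters with
# the matching text embedding inside each one is the pattern described
# above for the adapted model.
xy = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(
    np.vstack([img_emb, txt_emb]))
img_xy, txt_xy = xy[:30], xy[30:]
```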
**Benefits of weakly-augmented view for pseudo-labels.** In the context of self-supervised learning with pseudo-labels,
several approaches have introduced the concept of consistency regularization by deriving pseudo-labels from
weakly-augmented data samples, as demonstrated in works like FixMatch [1].
The rationale behind utilizing weakly-augmented views for generating pseudo-labels stems from the expectation that the
model's predictions on these mildly augmented versions will exhibit greater confidence compared to
predictions from strongly augmented counterparts.
The substantial alterations introduced by strong augmentations may distort the inherent data structure of the
image, potentially introducing noise to the predictions.
Conversely, when pseudo-labels are generated from weakly-augmented views, they tend to be less noisy,
facilitating smoother and more effective guidance of the learning process.
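A minimal numpy sketch of this FixMatch-style rule (toy logits take the place of real model outputs, and the confidence threshold `tau` is an assumed hyperparameter):

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

# Toy logits of the same batch under a weak and a strong augmentation.
logits_weak = 3.0 * rng.normal(size=(8, 5))            # confident view
logits_strong = logits_weak + rng.normal(size=(8, 5))  # perturbed view

p_weak = softmax(logits_weak)
conf, pseudo = p_weak.max(1), p_weak.argmax(1)

# Hard pseudo-labels come from the *weak* view and are applied only
# where the model is confident; the cross-entropy then pushes the
# strong view towards them (loss only computed here, not backpropped).
tau = 0.6
mask = conf >= tau
log_p_strong = np.log(softmax(logits_strong))
loss = -(log_p_strong[np.arange(8), pseudo] * mask).sum() / max(mask.sum(), 1)
```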
[1] FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence, Sohn et al., NeurIPS 2020. | Summary: This paper proposed a new approach to improve zero-shot vision recognition capability which leveraged the common embedding space for image and text. Specifically, with a pre-trained VLM (vision-language model), the authors leveraged LLM to automatically generate multiple language prompts for training a text-based classifier, which can be adapted for vision classification thanks to the shared embedding space. The classifier is further applied to augmented images to provide pseudo labels for finetuning the visual encoder in a label-free manner (pseudo labels generated from the text classifier). The authors perform extensive experiments to validate the effectiveness of the proposed training paradigm and on multiple benchmarks the model achieved significant performance gain.
Strengths: 1. The proposed method is intuitive and theoretically sound. The shared image-text embedding space naturally serves as a bridge for applying text-trained classifiers on image domains. The simple idea turns out to be very effective and brings significant performance boost across multiple datasets.
2. The conclusion is supported by extensive experiments with various ablation studies, showing many interesting patterns.
3. The paper is well-written and in good shape, it's easy to read.
Weaknesses: 1. Section 3.2 is named as "unsupervised finetuning" but technically since the model is trained with smoothed cross entropy loss, it's not truly "unsupervised", just that the labels are from text-based classifier (thus "pseudo labels"). The name is somewhat misleading as audience may get confused with unsupervised approaches.
2. CLIP is pre-trained with massive data thus demonstrating good zero-shot capability. Although this paper experimented on multiple benchmark datasets, the images of these datasets are all natural thus similar in style. The proposed approach achieves significant performance gain without real labels but the assumption is that the pseudo labels are relatively accurate due to image consistency. Therefore it is better to emphasize the point at least as a potential limitations. It might be better to be more careful with some claims e.g. L317 regarding the cross-modality transfer capability.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. Some other recent work used similar ideas for finetuning VLMs so maybe it's worth mentioning and provide a comparison. For example, [1] used a similar idea to generate multiple text and image prompts for finetuning VLMs domain adaptation.
[1] https://arxiv.org/abs/2306.16658
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The paper has a dedicated section discussing about the limitations with the single linear layer of text classifier. Some more important limitations may need emphasizing as I suggested in the weaknesses section that the general assumption of the effectiveness of the proposed approach is the high quality of the underlying VLM model. No obvious potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the time and effort spent in reviewing our paper. In the following, we provide a response to the questions raised in the review.
**Unsupervised finetuning naming convention.** Thank you for pointing it out. In our manuscript, we refer to the second
stage of LaFTer (Section 3.2 of the originally submitted manuscript) as 'Unsupervised Finetuning on Target Domain Images'
because we do not use any _ground truth_ labels from the training samples of the downstream datasets.
The supervision in the Cross-Entropy Loss is generated by the output predictions of the model itself.
However, we do acknowledge the source of confusion, which might be caused
because 'Unsupervised Finetuning' has been used in a variety of contexts, more commonly for
'Unsupervised Representation Learning'. Following the suggestion, we will consider changing the name of Section 3.2 to
'Label-Free Finetuning on Target Domain Images'.
We would also be happy to incorporate any further suggestions which may be provided during the discussion period.
**Distribution of images of the downstream datasets.** In addition to experimenting with our LaFTer on datasets
containing images collected in the real-world (e.g., CIFAR, ImageNet, SUN-397, UCF-101), we also tested our method
on an out-of-distribution variant of the original ImageNet dataset (ImageNet-Adversarial), which contains adversarial images for the original ImageNet categories, as well as on satellite data (EuroSat), whose images taken from space are
likely very rare in CLIP's web-scale training data. Please note that our method shows considerable performance gains by adapting
to the out-of-distribution images as well. For example, for EuroSat we obtain a performance gain of $28.8$% over the
original CLIP model. However, we do agree that since CLIP is trained on a large corpus of data scraped from the
internet, it is extremely challenging to know the _true_ data distribution of the training set and hence, it might
be the case that CLIP has seen similar images to the out-of-distribution variants during its large-scale pre-training.
We will emphasize this point and also rephrase $\text{L}317$ in the limitation section, of the revised manuscript.
**Recent work for finetuning VLMs.** Thank you for highlighting this relevant work.
Please note that it was released on ArXiv on $29^{th}$ of June, 2023 (long after the NeurIPS deadline).
However, we will discuss (and compare, provided the code is open-sourced) this work in the related work section
of our revised manuscript.
---
Rebuttal Comment 1.1:
Title: Thanks for the response!
Comment: I've read the response from the authors and am mostly satisfied with the answers. I don't have other questions at this moment and would like to keep my rating for acceptance.
Strengths: 1. Overall I think it is an interesting idea. By using the shared embedding from the two modalities, it generates relatively "cheap" data by using an LLM to train a classifier directly for the target classes. This classifier further gives a better "starting point" for the pseudo-labeling in the next stage.
2. The evaluations on the benchmark datasets show considerable improvements over other label-free methods. It also outperforms 1-shot fine-tuning methods on certain datasets.
Weaknesses: 1. What bothers me the most is that the performance of only using Llama descriptions is quite underwhelming (Table 3). It has a big gap to only using class names. Even the pure GPT description is only 1% better than class names on average. I think the failure of Llama needs more explanations. Also, these results might imply: 1) the generated texts are not that effective; 2) you need a really good LLM (GPT3 or better) to generate decent text descriptions to make this method work. IMO, both points will limit the contribution and impact of the proposed method.
2. I feel the diversity and quality of the generated texts will be very important in the proposed approach, as it will affect the generalization ability of the trained classifier. It is better to have some ablations on these aspects.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: LLM can just generate random text descriptions of the class. For example, it might generate the history of an object which is not very useful in visual classification. Did you consider imposing any constraints to make the LLMs generate only visually meaningful texts?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: To me, the biggest limitations are
1. The method might rely on a super good LLM which might not be accessible in many real-world scenarios.
2. The generated text for visual classification might not be visually meaningful.
No negative societal impact was observed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the time and effort spent in reviewing our paper. In the following, we provide a response to the questions raised in the review.
**Failure of Llama needs more explanation.** While comparing the descriptions of CIFAR-10 classes generated by Llama and GPT respectively, we found the following differences which could have an effect on the eventual classification performance.
- In most cases, GPT mentions the class name in its descriptions, while Llama does not. Our experiments (in Table 1 of global response), highlight that this property influences the results.
- The Llama descriptions often contain technical terms (e.g., in $\sim53$% of the reviewed descriptions), while GPT descriptions use more common terms (e.g., in $\sim94$% of the reviewed descriptions). To describe the airplane category, Llama mentioned terms like empennage, livery, composite material, whereas GPT descriptions contain terms such as tires, metal plates, etc. As (intuitively) technical terms were rarely encountered during the CLIP text encoder pre-training (e.g., the airplane class contained only $\sim8$% technical terms in the reviewed LAION-400m [1] captions, which can be representative of CLIP alt-text), it might be harder for it to leverage those terms for classification.
- The Llama descriptions had a larger variety of sentence structures than GPT, combining technical information with descriptive elements, whereas the GPT used simpler sentence structure with less variety, mentioning important details in a more succinct and structured manner. As the CLIP text encoder is trained on web collection of images and their alt-text, it is logical that it responds better to tuning with simpler sentence structure.
To test the benefits of including the class names, we prepend the class names to each of the Llama descriptions and provide results in Table 1 of the global response.
We see that by simply prepending the class name the performance increased by $3.8$% for CIFAR-100 and the average result is also $0.8$% better than the results obtained by using the GPT descriptions (while the results obtained by using descriptions from both the LLMs are on average $2.8$% better than using class names alone).
**Generated descriptions might not be so effective.** In Table $3$ of the main manuscript, we see that the GPT descriptions perform well on datasets composed of naturally occurring images (CIFAR-10/100 and ImageNet), showing $4.0$% increase on CIFAR-100 and $2.1$% increase on the large-scale ImageNet, as compared to using only the class names.
On the out-of-distribution datasets like ImageNet-Adversarial (A) and ImageNet-Rendition (R), GPT descriptions show a degradation.
For ImageNet-A it is expected, as learning a visual classifier with detailed LLM descriptions of classes enhances attention to those class details, while ImageNet-A contains adversarial images collected such that details of wrong classes appear in the images to confuse the classifier. On ImageNet-R, composed of renditions of ImageNet classes, we can easily improve the performance by slightly changing the LLM prompts to generate class descriptions for text-only pre-training in our LaFTer. For example, we prompt the LLMs (Llama and GPT) to provide us with descriptions of different types of renditions of objects present in the ImageNet-R dataset (e.g., art, graffiti, embroidery). An example prompt to the LLM is:
- Describe what an _embroidery rendition_ of a {category} looks like.
By using these _targeted descriptions_ for the datasets, we gain substantial improvements. For GPT, we obtain an improvement of $4.9$% ($63.6$% $\to$ $68.5$%), and for Llama descriptions prepended with class names we gain $2.6$% improvement in accuracy ($63.5$% $\to$ $66.1$%). Generating dataset-specific descriptions opens interesting future work directions for cross-domain text-only tuning (as already indicated by our preliminary results on ImageNet-R). Since the targeted responses help mitigate distribution shifts on the text side, the effect is naturally reflected on the vision side due to the shared embedding space.
**Method requires a really good LLM (GPT or better).** In our paper, we perform experiments with the open-source Llama and in the rebuttal discuss the reasons why the results obtained by using Llama descriptions lag behind GPT, and how they can be improved (Table 1 of global response).
In an effort to improve the accessibility of LaFTer, we also perform experiments with recently open-sourced Llama-2 by META.
The results (Table 1 of global response) show that the descriptions generated by Llama-2 provide competitive results to the GPT descriptions and even outperform them on average (by $1.2$%) when prepended with the class names. These results show that the recently open-sourced LLMs can be readily used as an alternative to GPT.
**Diversity and quality of descriptions.** In Figure 1 of the global response, we plot the resulting accuracy after randomly sampling a certain amount of GPT descriptions per class. We observe that as we increase the amount of descriptions per class, the accuracy also increases, highlighting that the diversity of generated descriptions is indeed important. Furthermore, our experiments with generating _targeted descriptions_ for ImageNet-R (described above), demonstrate that the quality of responses from the LLM has a strong influence on the eventual classification performance.
**Imposing constraints on LLMs for visually meaningful texts.**
We expect that our prompts to the LLMs would impose conditions on the responses to produce visually meaningful texts. Two example prompts to the LLM are:
- Describe what a {category} looks like?
- How can you identify a {category}?
These prompts instruct the LLM to produce _category descriptions_ in a visually meaningful way. Please refer to the Supplementary Section 3.1, for a complete list of prompts.
[1] Schuhmann et al. LAION-400m, NeurIPS 2021.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response! Below are my comments:
1. The analysis of the generated texts from GPT and Llama seems interesting. Although I am not sure if you have a clear definition of "technical" and "common" terms when you calculate their frequency, I get the point. Because the downstream model is using CLIP, the generated texts need to be more similar to what it was trained with, otherwise, the model cannot understand it (e.g., a lot of "technical terms"). This makes sense and aligns with my hypothesis that the Llama generated texts were not that effective. These analyses are valuable to anyone who wants to use the proposed method so I recommend authors add them to the final revision. Also on this point, one thing you could try is to enforce/encourage the Llama to only generate texts using "common terms", by using a prompt (few-shot instruction) or a constrained vocabulary (if you have a clear definition of the "technical" terms).
2. Results with LLama2 and diversity ablation. I appreciate these results and they should be added to the final version. It is encouraging to see the method can work with more accessible open-sourced LLMs.
Overall I am satisfied with the response as it addresses my concern about the accessibility of the capable LLMs for the proposed method. I feel comfortable increasing my score and encourage the authors to include the analysis and results in the response to the final version.
---
Reply to Comment 1.1.1:
Title: Thanks!
Comment: Thank you for the positive feedback on our response!
We would certainly include the analysis and the results from the response in the revised version of the paper.
Thanks again for your valuable suggestions!
We would appreciate it if the reviewer could indeed increase the score (currently it seems to remain unchanged). Thanks! | Rebuttal 1:
Rebuttal: We thank all the reviewers for their efforts to review our paper and for
providing insightful feedback.
We are happy to see that they found our work: **novel** `(YPJ1, zXQC, bBjN, o7C2)`,
**interesting** `(YPJ1, o7C2)` and **theoretically sound** `(bBjN)`.
Furthermore, we also thank them for highlighting that our work is **extensively evaluated**
on different benchmarks and shows **strong empirical results** `(YPJ1, bBjN, zXQC, o7C2)`,
contains **appropriate ablation studies** showing many interesting patterns and contributions of
individual components of the approach `(bBjN, zXQC)`, and also for finding our work
**easy to read and understand** `(bBjN, zXQC)`.
In the attached PDF, we present more insights to gain a further understanding of our LaFTer. We refer to this PDF throughout the response (to each reviewer) as **`global response`**.
Pdf: /pdf/fc1aba4544df226fe1d9c8dcd42ccb699f534dc6.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Simultaneous embedding of multiple attractor manifolds in a recurrent neural network using constrained gradient optimization | Accept (poster) | Summary: The paper investigates the challenge of embedding multiple continuous attractor manifolds within a single RNN, with a focus on hippocampal place cells. The issues arise due to the presence of discrete steady states, visualized as minima on an abstract energy landscape, which disrupt the continuity of network activity patterns. This disruption prompts systematic drift of population activity patterns towards these discrete states, resulting in degraded memory over time. Past studies have considered the stabilizing influence of external stimuli; however, solutions in their absence remain unclear. The authors address this issue by modifying the synaptic weight to flatten the energy landscape, showcasing through simulations how this significantly stabilizes the activity pattern.
Strengths: o The methodology and problem formulation are clearly articulated.
o Simulations indicate that weight modification significantly improves the stability of activity patterns.
o The authors provide the code in the appendix.
Weaknesses: o While the manuscript is generally well-written, some areas could benefit from improvement. A diagrammatic illustration could help elucidate the issue of discretized states in the context of the energy landscape. Further details of the simulations (e.g., duration and specific inputs to the network) need to be provided. The appendix, which helps clarify the methodology, should be referred to more often in the Results section.
o Strengthening the paper could involve additional simulations and/or discussions (see questions below).
o There is a minor typo: "emfbedded map" should be "embedded map."
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: o How applicable is your approach if connection weights are not symmetrical?
o How robust is the pattern stability if the modified weights are slightly perturbed?
o Can your approach provide biological insights into the structure of weight modification when new maps are added?
o Does the resulting weight (after the modification) violate Dale’s law?
o While it's evident that an energy landscape embedded with minima leads to activity patterns settling into one of them, thereby degrading the network’s ability to sustain persistent memory, does a flat energy landscape guarantee pattern stability?
o Here are some additional related works that could be included in the discussion [1-2].
o References:
[1] Genkin and Engel, Nature Machine Intelligence, 2020: https://www.nature.com/articles/s42256-020-00242-6
[2] Whittington et al., Cell, 2020:
https://www.sciencedirect.com/science/article/pii/S009286742031388X
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: The authors have acknowledged certain limitations, such as the unclear biological plausibility of their approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >While the manuscript is generally well-written, some areas could benefit from improvement. A diagrammatic illustration could help elucidate the issue of discretized states in the context of the energy landscape.
We agree that it will be helpful to include a schematic illustration to clarify the concept of discrete minima in the energy landscape. Since the final version can include one additional page, we plan to include such an illustration if accepted. We thank the reviewer for suggesting this.
>Further details of the simulations (e.g., duration and specific inputs to the network) need to be provided.
The Supplementary text includes full details of the network dynamics, parameters, the numerical integration scheme used to simulate the dynamics, the optimization scheme, and quantifications. We will be grateful if the reviewer would let us know in case something specific seems to be missing. Note that there are no training inputs in our study.
>The appendix, which helps clarify the methodology, should be referred to more often in the Results section.
We thank the reviewer for this comment and now refer to the Supplementary Material more frequently in the Results section.
>Strengthening the paper could involve additional simulations and/or discussions (see questions below).
See responses below.
>There is a minor typo: "emfbedded map" should be "embedded map."
Fixed, thanks!
>How applicable is your approach if connection weights are not symmetrical?
Our approach is based on the existence of a Lyapunov function [9], and this relies on symmetric connections. All existing models for embedding multiple attractors in CA3 adhere to this rule. Furthermore, one of the key values of our work, in our view, is that it provides a proof of principle for the existence of a more stable version for a discrete set of continuous attractors than previously expected, which was assumed to be inevitably unstable. In future work, it will be highly interesting to seek biologically plausible learning rules, and to relax the assumption of symmetric connectivity.
>How robust is the pattern stability if the modified weights are slightly perturbed?
One of the main criticisms of continuous attractor networks is that they require fine tuning of the weights: small perturbations to the weights destroy the continuity of the attractor. This is also why the embedding of multiple maps, using the standard prescription, leads to a wrinkled energy landscape. Considering this, it is a priori expected that the recovery of a flat energy landscape would rely on weights that require fine tuning. One of the interesting features of our results is that the required correction weights are small (see Fig. 4b). A hypothetical learning rule that is based on stability of the attractor states would thus need to explore only small local perturbations to the initial weights, whereas the initial weights could be learned based on a simple Hebbian rule. Even though the correction weights require fine tuning along some dimensions, note that out of an infinite space of potential weight modifications only a specific set (least-squares) was chosen, leaving many unexplored solutions to the energy flattening problem which may relax the fine-tuning requirement to some extent.
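As a toy illustration of the least-squares point above (our construction, not the authors' code): to first order, the energy at each sampled bump position is linear in the weight correction, so equalizing all energies is an underdetermined linear system whose minimum-norm solution is small. The matrix `A` and vector `E0` below are random stand-ins for the true energy gradients and values.

```python
import numpy as np

# Toy sketch (not the paper's actual model): to first order,
#   E(x_k) ≈ E0_k + (A @ dw)_k,
# so equalizing all sampled energies is an underdetermined linear
# system in the weight correction dw. A and E0 are random stand-ins.
rng = np.random.default_rng(0)
n_positions, n_weights = 60, 600          # far fewer equations than unknowns
A = rng.standard_normal((n_positions, n_weights))
E0 = rng.standard_normal(n_positions)     # "wrinkled" initial landscape

target = np.full(n_positions, E0.mean())  # flat landscape at the mean energy
dw, *_ = np.linalg.lstsq(A, target - E0, rcond=None)  # minimum-norm solution

E_corrected = E0 + A @ dw
print(float(np.ptp(E_corrected)))         # close to zero: landscape flattened
```

Because the system has many more unknowns than equations, `lstsq` picks one particular (smallest-norm) correction out of an infinite solution space, mirroring the point that many other flattening solutions exist.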
>Can your approach provide biological insights into the structure of weight modification when new maps are added?
See above response. The main correction to the weights is the one arising from the naive embedding scheme. The modifications that we identify in our study are rather subtle. Therefore, attempts to relate plasticity under exposure to new maps to theoretical constructs should first focus on the predictions of the naive scheme.
>Does the resulting weight (after the modification) violate Dale’s law?
Like many theoretical models of attractor dynamics in the hippocampus (and elsewhere in the brain) we abstract away the distinction between excitatory and inhibitory connections. One imagines that inhibitory connections are realized via interneurons.
The unmodified synaptic connections in our model can be thought of as consisting of all-to-all inhibitory connections (mediated by interneurons), and excitatory connections which are specific to neuron pairs that have similar tuning in any one of the maps. Therefore, it is easy to reformulate the model in accordance with Dale’s law.
Since in our model each neuron forms ~100 excitatory connections associated with each map, almost all neurons end up being connected to each other with an excitatory connection when the number of embedded maps exceeds 6 (since N=600). The small modifications to the weights that are required to flatten the energy landscape could thus be implemented as a subtle increase (or decrease) in the strength of the excitatory weights. Alternatively, the weight modifications could be applied to both excitatory and inhibitory synapses. It will be interesting to flesh out these ideas in a follow-up work.
>While it's evident that an energy landscape embedded with minima leads to activity patterns settling into one of them, thereby degrading the network’s ability to sustain persistent memory, does a flat energy landscape guarantee pattern stability?
Yes. The existence of a Lyapunov function guarantees that dynamics will settle on local minima of the function. When minima form a manifold (a continuous set of states, all sharing the same energy), stability of each state along this manifold is guaranteed. This Lyapunov function has the property that dE/dt < 0 whenever dI/dt ≠ 0, and thus a flat energy landscape guarantees pattern stability.
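For concreteness, one standard form of such a Lyapunov function for a symmetric rate network (schematic notation, following the classic construction, not necessarily the paper's exact conventions) is:

```latex
E = -\frac{1}{2}\sum_{i,j} M_{ij}\, f(I_i)\, f(I_j)
    + \sum_i \int_0^{f(I_i)} f^{-1}(\rho)\, d\rho ,
\qquad
\tau \frac{dI_i}{dt} = -I_i + \sum_j M_{ij}\, f(I_j),
```

for which, with symmetric $M$ and a monotonically increasing transfer function $f$,

```latex
\frac{dE}{dt} = -\tau \sum_i f'(I_i) \left(\frac{dI_i}{dt}\right)^{2} \le 0,
```

with strict inequality whenever some $dI_i/dt \neq 0$, so dynamics can only stop at stationary states.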
>Here are some additional related works that could be included in the discussion [1-2].
References:
[1] Genkin and Engel, Nature Machine Intelligence, 2020: https://www.nature.com/articles/s42256-020-00242-6
[2] Whittington et al., Cell, 2020: https://www.sciencedirect.com/science/article/pii/S009286742031388X
Thanks, we will incorporate these references.
---
Rebuttal Comment 1.1:
Title: Response acknowledged
Comment: Thank you for your thorough response and additions to the paper. I am going to increase my score. | Summary: The paper studies the storage of multiple continuous attractors in a recurrent neural network. Specifically, the authors tackle the interference between attractors and its effect on activity bump drift. By using a perturbative approach, they compute a correction to the connectivity that reduces the drift dramatically.
Continuous attractors (e.g., ring model) are important models in neuroscience, and understanding them is an important task. Multiple attractors are relevant, for example, in the CA3 region of the hippocampus, where remapping of place cells in different environments is common. Nevertheless, a naïve connectivity that is a superposition of several ring-connectivities results in only approximate continuous attractors. The result is a few stable points in each attractor, to which dynamics converge.
The authors use the perspective of an energy function (Lyapunov), and examine how the interference renders this function non-flat. They then calculate the perturbation to leading order, and solve for a change in connectivity that will flatten the energy. Furthermore, using gradient descent, they are able to achieve even greater precision.
Strengths: Continuous attractors are a fundamental building block in the study of recurrent neural networks in neuroscience contexts. There are relatively few studies tackling multiple such attractors. The method to reduce interference is novel.
Weaknesses: First, as the authors note, the resulting connectivity is extremely fine-tuned. This is a known problem with continuous attractors that is not addressed here.
Second, the problem and solution are highly related to similar problems in discrete attractors. Interference in Hopfield networks was tackled using pseudo inverse rules, either approximated online or as a global formula. There is no discussion of the relation to these works. The SVM approach of Battista and Monasson (Ref 5) is perhaps a similar example in continuous attractors.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: L146 energy IS quadratic
L178 “vast majority” the sentence isn’t very clear. Was the intention “close to a fixed point” or something similar?
L146 – I didn’t fully understand the argument on why the first order vanishes.
Can you say something about higher dimensions? Place cells are relevant in 2D. The results of [5,23] suggest differences when dimension increases.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >First, as the authors note, the resulting connectivity is extremely fine-tuned. This is a known problem with continuous attractors that is not addressed here.
We agree (please see also our seventh response to reviewer MLg2, regarding fine-tuning).
>Second, the problem and solution are highly related to similar problems in discrete attractors. Interference in Hopfield networks was tackled using pseudo inverse rules, either approximated online or as a global formula. There is no discussion of the relation to these works. The SVM approach of Battista and Monasson (Ref 5) is perhaps a similar example in continuous attractors.
The potential relation to the works on pseudo-inverse based variants of the Hopfield network architecture is interesting, and we thank the reviewer for bringing this up. While it will be interesting to consider this relation in depth, we would like to point out two potential differences from the scheme explored in our work: First, the goal of the pseudo-inverse rule is to precisely embed a set of prescribed patterns as attractor states of the dynamics, whereas we are willing to accept distortions in the bump states and only care about (i) keeping them localized and (ii) precisely equalizing their energy. We suspect that an attempt to embed the idealized bumps as perfect steady states (using perhaps a generalization of the pseudo-inverse rule) would prove more difficult than our goal, yet this is an interesting question and our intuition on this is only an initial thought. The second difference is that the key benefit of the pseudo-inverse method is in increasing the capacity of the network. There are some conceptual issues in relating a definition of capacity in our problem to the definition in the discrete and binary case, but if we judge the capacity by the number of maps at which delocalized states start to appear it seems that our approach does not strongly affect the capacity (Fig. 5c), whereas its main outcome is in the flattening of the energy landscape.
We now briefly mention in the Discussion the potential interest in exploring a generalization of the pseudo-inverse rules, as a means of enhancing the stability of states along simultaneously embedded continuous attractors.
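For context, the classical pseudo-inverse (projection) rule referred to above can be written (textbook form, our notation), with the stored patterns stacked as columns of $\Xi$:

```latex
W = \Xi \left(\Xi^{\top}\Xi\right)^{-1} \Xi^{\top},
\qquad
W\,\xi^{\mu} = \xi^{\mu} \ \text{for every stored pattern } \xi^{\mu},
```

which embeds the prescribed patterns exactly as fixed directions, in contrast to the scheme discussed here, which tolerates deformations of the bump states and only equalizes their energies.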
>L146 energy IS quadratic
Fixed, thanks.
>L178 “vast majority” the sentence isn’t very clear. Was the intention “close to a fixed point” or something similar?
Yes, this was the intention, and we thank the reviewer for pointing out that the phrasing was unclear.
We now revised it by: “...indicating that the convergence to a steady state was nearly complete (Fig. 3a).”
>L146 – I didn’t fully understand the argument on why the first order vanishes.
We now revised the sentence in lines 146-148 as follows:
“The precise structure of these states affects the second and third terms on the right hand side of Eq. (2) via the deviation of the state from an idealized bump. This contribution to the energy modification is quadratic in the deformations, because the idealized bump states are minima of the unperturbed energy functional, and therefore the energy functional is locally quadratic near these minima.”
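In symbols (our notation; $H$ denotes the Hessian of the unperturbed energy at the bump state): since each idealized bump $\psi$ is a minimum of the unperturbed energy $E_0$,

```latex
\nabla E_0(\psi) = 0
\quad\Longrightarrow\quad
E_0(\psi + \delta\psi) = E_0(\psi)
  + \tfrac{1}{2}\,\delta\psi^{\top} H\, \delta\psi
  + O\!\left(\|\delta\psi\|^{3}\right),
```

so the first-order term vanishes and the contribution of a deformation $\delta\psi$ is quadratic.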
We hope that this helps clarify the argument. Please let us know if further clarification is required.
>Can you say something about higher dimensions? Place cells are relevant in 2D. The results of [5,23] suggest differences when dimension increases.
Ref. [23] has shown that unlike in 1D, activity bumps can bypass an energy barrier in 2D when a constant external input (or force) is applied. Intuitively, this happens since in 1D there is only a single possible direction for the bump’s motion along the applied force while in 2D there may be multiple directions (each with an overall smaller force), in which the bump could bypass the energy barrier. The question of bypassing energy barriers given external inputs is, however, decoupled from the question of systematic drifts that we address here, which are independent of external inputs. Comparing, perhaps, the typical magnitude of these systematic drifts in a naive 1D vs 2D network is indeed interesting and will be explored in future work. Nevertheless, the proof of principle provided here in 1D should be easily extended to 2D, and so it is expected that emerging systematic drifts in a naive 2D network will be attenuated using the methodology we demonstrated in 1D.
There are N true steady states in a continuous attractor formed by N neurons, independently from a 1D or 2D organization. In order to produce an approximate continuous attractor, these states should densely tile the represented space, and this implies a different scaling with the size of the environment in 2D (quadratic) vs 1D (linear). Once the number of neurons is sufficiently large to allow for the single-map attractor to be nearly continuous, we do not see any conceptual difference in the problem of flattening of the energy landscapes for simultaneously embedded attractors between 1D and 2D. In both cases, the number of equations that need to be solved scales as N*L, whereas the number of weights scales as N^2. While the flattening of the energy landscape is expected to behave very similarly, we do not know how the number of true minima of the energy landscape (which is never precisely flat, even in our scheme) will behave. We did not quantify this quantity, as our main focus was on stability, or in other words, on reducing the systematic drifts to minimum. This latter question is more directly related to the quantifications in Ref. [5], where the key interest was in the number of true attractors, and not on the speed of convergence when starting at intermediate states. Relating our results, in a rate model network, to those of Ref. [5] (in a binary network) is interesting, as mentioned in the Discussion.
Please see also our first response to reviewer m6Fs.
---
Rebuttal Comment 1.1:
Title: post rebuttal
Comment: Thank you for the replies and clarifications.
Regarding the approach of ensuring a number of patterns are fixed points, and the relation to continuous attractors, this reference might also be relevant:
Darshan, R., & Rivkind, A. (2022). Learning to represent continuous variables in heterogeneous neural networks. Cell Reports, 39(1).
I'm maintaining my score.
---
Reply to Comment 1.1.1:
Comment: Thanks for the suggestion, we agree that this reference is relevant in this context and will include it in the final version. | Summary: A new method is proposed to allow the simultaneous embedding of multiple attractors in an RNN through minimization of the energy function corresponding to the dynamics.
Two different methods to achieve this are considered, the first one based on the linearized energy function and the second on constrained optimization. The second method optimizes the weights for a flat energy landscape with the constraint that there is no change in the center of mass of the bumps.
Strengths: The aim to flatten the energy landscape of an attractor network with multiple attractors embedded in it is novel.
The main aim and the methods are clearly described.
The discussed problem of detrimental interference is definitely very important for theoretical neuroscience.
Weaknesses:
There is a lack of assurance that the modified network with the equalized energy function actually maintains the bumps as minima of the energy landscape.
The procedure also doesn't necessarily produce a flat energy landscape; it just ensures that all the evaluated states have the same energy value.
A more thorough way to enforce flattening energy functions for RNNs is described in: Noorman, Marcella, et al. "Accurate angular integration with only a handful of neurons." bioRxiv (2022): 2022-05.
The time to implement the algorithm is also really long (72 hours for $L=60$) and does not seem to be practical.
The reason to use the bump score is not fully justified.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 174: To analyse the modified network, one can use the linearized network or Lyapunov exponents to identify stability. Try to find the equilibria of the modified network and assess the stability of these points. Do the equilibrium states have a Lyapunov exponent close to 0? (This is expected from the ring attractor structure.)
CANNs can implement path integration. Is it possible to do that reliably in the proposed network for the different embedded maps?
However, the proposed idealized bump method to do this does not guarantee that the acquired energy landscape is truly flat. Can you plot the energy landscape for values in between bump positions?
SI 111: Do you mean that the function $f$ is defined \emph{in terms of} the weighted averaged position of the rates? The definition that follows is different.
132-133: Could you further explain how these steps follow from each other? The gradient of $I^{k,l}$ in S24 is not dependent on the constraint, why couldn't the derivative of it with respect to $M_{ij}$ change in another direction?
190: Could you further assess the stability of the network by introducing small displacements in the activity?
What are the Lyapunov exponents of the network? Do they correspond to a continuous attractor?
194: Could a different definition of the location lead to a different conclusion from the analysis?
The actual attractor states are no longer necessarily the idealized attractors.
Therefore, the mean square change defined like this might be biased.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations:
They are adequately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >There is ... landscape.
Existence of the Lyapunov function rigorously guarantees that network dynamics will converge to stationary states which are local minima of the energy (lines 96-97). We have precisely characterized these minima, and carefully verified our numerical scheme by checking that the energy function is locally quadratic near these minima. Nevertheless, we agree that there is value also in checking independently that convergence takes place, as additional verification of the numerical scheme. Please see Supp Figs. 1 and 2b.
>The procedure ... value.
This is true only for the first scheme, in which we evaluate the energy of idealized bump states. Next, we find the true minima of the energy, constrained on the center of mass of the bump, thereby precisely evaluating the energy landscape as a function of bump position. Our optimization scheme then aims to flatten this landscape, and we succeed in doing so as shown in Fig. 4c and Supp Fig. 3b.
>A more ... (2022):
Noorman et al aimed to generate a *single* continuous ring attractor with a small number of neurons. Their approach, like ours, is based on the flattening of a Lyapunov energy landscape. Their work treats a very specific form of connectivity with cosine weights that enables the derivation of some results analytically, but is difficult to generalize to other forms of connectivity. In this sense, our approach is more general. In fact, we successfully used our approach to address the problem of Noorman et al using other forms of initial connectivity. In the *multi-map* problem addressed in our submission, the random permutations yield highly irregular weight profiles, and the problem of flattening the landscape is considerably more difficult.
>The time ... practical.
Our focus is on asking a basic theoretical question: whether it is possible to embed multiple flat continuous attractors in a single network. We did not put effort into optimizing our code or running it on state-of-the-art hardware, and it is likely that there are more efficient ways to do so. For example, much of the ~72 hours mentioned above is due to our lack of an automated pipeline for gathering the results of each iteration and generating the next one. The key value of our work is in showing that the flattening is possible, not in the efficiency of the algorithm used to obtain this result.
>The reason ... justified.
In the literature on Hopfield based models, the retrieval of a memory is obtained by computing the intuitive overlap measure between the memory pattern and the network state. This is equivalent to the computation of position using our bump score. Previous works that studied multiple embedded maps have used this measure as well [32,23,1]. For completeness, we also used the population vector to measure the position of localized activity. Both measures have shown a dramatic improvement in the network drift (Supp Fig. 4). We now clarify this choice in the SM.
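To make the two readouts concrete, here is a minimal sketch (the function names, Gaussian bump shape, and parameters are our assumptions, not the authors' code) of decoding bump position by template overlap (the "bump score") versus by population vector:

```python
import numpy as np

N = 600                                   # network size, as in the paper
theta = 2 * np.pi * np.arange(N) / N      # preferred positions on a ring

def idealized_bump(center, width=0.3):
    """Idealized bump of activity centered at `center` (radians)."""
    d = np.angle(np.exp(1j * (theta - center)))   # wrapped angular distance
    return np.exp(-d**2 / (2 * width**2))

def decode_overlap(r):
    """Bump score: center of the idealized template with maximal overlap."""
    overlaps = [r @ idealized_bump(c) for c in theta]
    return theta[int(np.argmax(overlaps))]

def decode_popvec(r):
    """Population vector: rate-weighted circular mean of preferred positions."""
    return np.angle(np.sum(r * np.exp(1j * theta)))

r = idealized_bump(1.0)                   # noiseless bump at 1 radian
```

For a clean localized bump the two readouts agree to within the single-neuron resolution of 2π/N, consistent with the statement that both measures show the same drift behavior.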
>174: ... structure.)
Since rate networks with symmetric weights are guaranteed to settle on stationary steady states, it’s sufficient to examine the Jacobian of the dynamics to assess stability. If the attractor is continuous, one expects to observe an eigenvalue close to zero. We didn’t check this explicitly, but the slow dynamics over a continuum of near-steady states and the near-flatness of the energy function (Figs. 4c and 5a,d,e, and Supp Figs. 1 and 3b) implies so.
>CANNs ... maps?
It has been argued that path integration within the hippocampus would be difficult, because the synaptic connectivity required to do so would need to be tailored for each embedded map. Thus, it was suggested that path integration occurs elsewhere - perhaps in the entorhinal cortex. This idea was recently examined in Ref. [1], demonstrating that path integration can reliably be implemented despite the interference between the maps. The flattening procedure discussed in our work is expected to substantially improve the accuracy of this computation.
>However, ... positions?
As the reviewer notes, Fig. 2a,b for 10 maps only shows an approximation to the energy landscape, because the idealized bump states are not true minima of the energy. We later evaluate the precise energy landscape by finding states that minimize the Lyapunov function under a constraint on their center of mass. This is done at a dense set of positions (Fig. 4a and Supp Fig. 3b), with single neuron resolution. Even the classical ring attractor composed of N neurons is not precisely continuous, but has N precisely steady states. This is an issue of practical significance only when the number of neurons is small (Noorman et al).
>SI 111: ... different.
Yes, fixed.
>132-133: ... direction?
I^kl in Eq. S24 is defined as the state that minimizes the energy under the constraint. This is where the dependence on the constraint is coming from. Any modification of I^kl due to changes in M must remain in the subspace in which the constraint is obeyed. We modified the SM to clarify this point. In addition, note that we numerically verified that the second term in Eq. S24 vanishes (Supp Fig. 2b).
>190: ... attractor?
The steady states that we identify are minima of the energy under the constraint on the center of mass, and therefore the manifold of states that we identify is stable. The stability of the approximate attractor is evident in Fig. 5a,d,e. See above response regarding the Lyapunov exponent.
>194: ... biased.
This is true for our idealized bump scheme (Fig. 3c), where the mean squared change (MSC) is compared between the converged and the initialized idealized bump states. However, in Fig. 5 we quantify the stability of states that are true minima of the energy under a constraint on the position of the bump: we start from these deformed bump states and follow their dynamics (SM lines 84-85). Since the MSC is compared between deformed converged states of consecutive iterations there is no bias in relation to idealized bumps.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their replies and clarifications.
About the flatness of the energy landscape
When do you consider the number of neurons to be small? I agree that truly continuous attractors are only achieved when the number of neurons is infinite, but that would imply that anything smaller than that is a "small" number of neurons?
Is there a different sense in which you understand "small" for the "practical significance" you mention?
There seems to be a misunderstanding about my reference to the Lyapunov exponent (https://en.wikipedia.org/wiki/Lyapunov_exponent). I understand Fig 5 to contain information about the Lyapunov energy; however, I think it would benefit the analysis and would make the claims about stability stronger. To me it is unclear how the measure of stability in the paper relates to a resolution that is below the spatial resolution of one neuron (which should be considered for a continuous attractor).
I maintain my score.
---
Reply to Comment 1.1.1:
Comment: >About the flatness of the energy landscape When do you consider the number of neurons to be small? I agree that truly continuous attractors are only achieved when the number of neurons is infinite, but that would imply that anything smaller than that is a "small" number of neurons? Is there a different sense in which you understand "small" for the "practical significance" you mention?
There are several ways in which we understand "small" from a practical standpoint. The first is simple: if there are, say, a thousand neurons participating in a ring attractor with a single map, the graininess of the energy landscape is 2pi/1000. In other words, a stable representation can be maintained at the resolution of a full circle divided by the number of neurons. Whether or not this is small in a practical sense depends on the context, but it is no coincidence that the question addressed in the work of Noorman et al was brought up only recently, with the discovery of a ring attractor with a few dozens of neurons. In CA3, one can estimate that tens of thousands of neurons participate in the representation of each square meter map. Note, in addition, that the granularity arising from the finite number of neurons is small, within the parameters that we work with, compared to the tuning curve of each neuron.
Second, the drifts that we address in our work, which arise from the embedding of multiple maps, are much larger than the single neuron resolution - see Figure 3d. The single neuron resolution seems to us like a natural scale for our attempts to flatten the energy landscape, since this is the graininess which is expected without further manipulations even if only a single map is embedded in the connectivity.
In principle, however, we could have attempted to equalize the energy over a set of positions which is more dense than the single neuron resolution. Our methodology can be applied to achieve this goal even in the single map case - in similarity to the results of Noorman et al, but using an approach which is in some ways more general. We were able to do so for various forms of the single-map connectivity, and were even able to obtain an exquisitely stable representation of positions over a continuum of angles in a network consisting of only three neurons. We did not include these results in our submission because the focus of the present work is on the consequences arising from the embedding of multiple maps.
Third, in a realistic neural network one expects to have two types of noise: frozen noise in the connectivity, which in our context arises from the embedding of multiple maps, and dynamic noise that arises from the fact that neural activity is dynamically stochastic. Dynamic noise causes diffusive random motion in the position of the bump, which accumulates over time. The characteristic magnitude of the diffusive motion over a short time interval Δt scales as the square root of Δt/N, where N is the number of neurons, whereas the graininess arising from the discrete number of neurons scales as 1/N. In addition, the energy barriers decrease with N. Consequently, random diffusive motion dominates the motion over short time scales, and the systematic drifts associated with the discrete number of neurons are completely washed out by the random diffusive motion when N is sufficiently large. With biophysically reasonable parameters, and when N is on the order of several hundred or more, the statistics of random motion in a single ring attractor is practically indistinguishable from the statistics predicted analytically using the continuum limit, while completely ignoring the graininess that arises from the finite number of neurons (see, e.g., https://doi.org/10.1073/pnas.1117386109).
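In symbols, the two scales contrasted above are (schematic):

```latex
\delta x_{\mathrm{diff}}(\Delta t) \sim \sqrt{\frac{\Delta t}{N}},
\qquad
\delta x_{\mathrm{grain}} \sim \frac{2\pi}{N},
```

so for large $N$ the diffusive displacement over short time scales dominates the graininess arising from the discrete number of neurons.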
>There seems to be a misunderstanding about my reference to the Lyapunov exponent (https://en.wikipedia.org/wiki/Lyapunov_exponent). I understand Fig 5 to contain information about the Lyapunov energy, however, I think it would benefit the analysis and would make the claims stronger about stability. To me it is unclear how the measure of stability in the paper relates to a resolution that is below the spatial resolution of one neuron (which should be considered for a continuous attractor).
Lyapunov exponents are primarily used to identify chaotic dynamics. In our case, there is no question that the system has stable attractors, and it cannot have a positive Lyapunov exponent. You are correct, that a system with a semi-stable continuous attractor should have a vanishing Lyapunov exponent. What we pointed out in our previous response is that in a system with stable attractors, a somewhat simpler signature for the continuity is that the Jacobian has a vanishing eigenvalue. This has indeed been used previously in the literature on attractor neural networks as a way to quantify the continuity of the attractor, and we could add such an analysis to the final version. | Summary: This work tackles the problem of interference between continuous attractors when they are held in a single RNN. The authors adopted a Lyapunov function as a depiction of energy of the network and tried to flatten the energy landscape of attractors by adding a modulation term to the original connection matrix. The modulation term was derived by two methods respectively: a first-order approximation of the energy function and a constrained iterative gradient-based method. They showed that by constraining bumps at their initial position, the gradient descent achieved a better result.
Strengths: The work achieved the goal of encoding multiple continuous attractors into a single recurrent network.
Weaknesses: 1. The recent experimental data actually showed that in the remapping of cognitive maps in hippocampus, place cells encoding different maps actually have little overlap, i.e., the hippocampus recruits different groups of neurons to form different continuous attractors. In other words, the interference between multiple continuous attractors is only a mathematical problem, not a biological problem. This limits the contribution of this study to neuroscience.
2. There are several issues in this study whose biological plausibility is not justified: the energy function (note that real neuronal connections are not symmetric), the modification of synapse strengths based on the attractors the network has stored, and the gradient-based learning method. Overall, the insight this study offers to neuroscience is rather limited.
3. Some important references are missing, such as the work of Misha Tsodyks et al. on storing multiple continuous attractors (PLoS Computational Biology?)
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Since the learning method is used to determine M, why do the authors use W=\sum_i J_i+M and learn the modification term, rather than learning W from scratch?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The limitations are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >The recent experimental data actually showed that in the remapping of cognitive maps in hippocampus, place cells encoding different maps actually have little overlap, i.e., the hippocampus recruits different groups of neurons to form different continuous attractors. In other words, the interference between multiple continuous attractors is only a mathematical problem, not a biological problem. This limits the contribution of this study to neuroscience.
Remapping of CA3 (and CA1) place fields in distinct environments (“global remapping”) has been documented in a large number of studies (see, for example, the following two review articles:
https://doi.org/10.1016/j.tins.2008.06.008, https://doi.org/10.3389/fnbeh.2017.00253).
Typically, in a given small environment, only a subset of CA3 place cells are active. Therefore, some cells will be active only in one environment and not in another. However, in any two environments, there is a significant fraction of cells that express firing fields under both conditions. The hallmark of global hippocampal remapping is that the spatial relationship between these fields is unrelated in the different environments.
As an example, see:
https://doi.org/10.1038/nature05601
Figure 1 shows firing rate maps of a few CA3 cells under global remapping, and Supporting Figure 2 (panels A and B) shows rate maps of many additional cells.
We are therefore not sure why the reviewer comments that the hippocampus recruits distinct, non-overlapping groups of neurons to represent different environments. We may have misunderstood the reviewer's comment, and therefore we kindly ask the reviewer to reexamine this claim.
>There are several issues in this study whose biological plausibility are not justified, the energy function (note the real neuronal connections are not symmetry), the modification of synapse strengths based on the attractors the network has stored, the gradient-based learning method. Overall, the insight of this study to neuroscience is rather limited.
Progress in the theoretical understanding of the brain has benefited from various approaches and styles of research: among these approaches, those based on mathematical abstraction have often proved useful, because abstraction facilitates the identification of general principles. For example, the Hopfield model of associative memory was formulated in terms of binary units, with symmetric connectivity, and using many other gross idealizations of the biological reality. Despite these choices (and, in fact, perhaps because of them), the model had tremendous influence on thinking about short-term and long-term memory, and on the relation between neural network organization and function. Numerous later studies expanded on this model. Some examined how the same principles can be realized with more biological realism, while others used the model in unexpected contexts not foreseen originally.
Likewise, most models of head-direction, grid-cell, and hippocampal networks to date assume symmetric connectivity, as in our study. This is not meant to suggest that the question of non-symmetric connectivity is unimportant: however, by providing a proof of principle that multiple continuous attractors can be embedded without compromising stability (contrary to what was previously presumed), we provide a new insight, even if we do so at this stage only for symmetric connectivity. It is likely that our work will motivate future studies, in which some of the limitations mentioned by the reviewer (and discussed in the section of our manuscript on limitations) will be confronted.
Furthermore, the value of rigorous mathematical theory is often realized in unexpected contexts. The ring attractor model was initially proposed for orientation selectivity in V1, yet it has motivated the formulation of theoretical models of head-direction cells, and has recently been extremely valuable in understanding the physiology and structure of the fly’s central complex, where a surprisingly tight correspondence has been observed between network organization (and dynamics) and the seemingly idealized mathematical theories developed in previous studies of head-direction cells.
We therefore believe that our study provides a significant development in the theory of attractor networks in the brain, and that it fits well into the scope of NeurIPS.
>Some important references are missing, such as the work of Misha Tsodyks et al. on storing multiple continuous attractors (PLoS Computational Biology?)
We were not sure which paper was meant here, and wonder whether it is this one:
https://doi.org/10.1371/journal.pcbi.1000869.
This article proposes a network architecture in which two or more maps are embedded in a single network, in the case where the patterns of activity associated with the maps are correlated. The paper does not address the problem of achieving precise stability, yet it is thematically related to our work in a broad sense because of the study of simultaneously embedded maps. We will be happy to cite it, and will be grateful if the reviewer could provide us with other concrete suggestions for additions to the reference list.
>Since the learning method is used to determine M, why the authors use W=\sum_i J_i+M and learn the modification term, why not learn W from the first space?
We first started from the established naive form of connectivity J, which was identified and used in previous studies for embedding multiple attractors. This initial connectivity can be viewed as arising from a Hebbian form of learning, as discussed in previous studies. We implemented a *perturbative* approach in which we started from this known approximate solution to the problem, where the energy landscape (as we show) is not precisely flat. The perturbative approach is key to our methodology (please see lines 137-152), and the weight corrections that are required to rescue the continuity of the attractor are indeed small (Fig. 4b).
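To make the naive embedding concrete, here is a minimal numerical sketch; the cosine single-map profile, the network size, and the number of maps are illustrative assumptions, not the exact connectivity used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 64, 5  # neurons and number of embedded maps (illustrative values)

# Single-map ring connectivity: strongest coupling between neurons with
# nearby preferred positions on the ring (a common idealized choice).
theta = 2 * np.pi * np.arange(N) / N
J_single = np.cos(theta[:, None] - theta[None, :]) / N

# Naive multi-map embedding: sum over randomly permuted copies, one per map.
J = np.zeros((N, N))
for _ in range(L):
    p = rng.permutation(N)
    J += J_single[np.ix_(p, p)]

# The perturbative approach then learns a small correction M and uses
# W = J + M; here M is only a zero placeholder of the right shape.
M = np.zeros((N, N))
W = J + M
assert np.allclose(W, W.T)  # symmetric connectivity, as assumed in the paper
```

Because each permuted copy of a symmetric matrix is symmetric, the summed connectivity J (and hence W, for symmetric M) stays symmetric, consistent with the energy-based formulation.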
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors insightful response. Most of my concerns are resolved.
Regarding the remapping of place cells, the authors could see this work (https://www.pnas.org/doi/10.1073/pnas.1421056111), whose results show that the overlap between the recruited place cells in different environments is minimal.
---
Reply to Comment 1.1.1:
Comment: Thanks for pointing to the manuscript of Alme et al (PNAS, 2014). This work is helpful because it quantifies the overlap more systematically than in previous studies. The recordings made by Alme et al were carried out mostly in novel environments and only for 15 minutes, and therefore it is possible that spatial maps were not yet fully established in the CA3 network (see also Leutgeb et al, 2004). Nevertheless, the recordings are informative on a qualitative level, and the conclusion is that overlap is small, but not minimal. In the context of our work the overlap observed by Alme et al is extensive and consequential, as we explain below.
The interpretation reached by Alme et al is that most CA3 place cells are active in ~7% of the environments and a smaller subset of cells, estimated in the paper to be 10%-20% of the total population, are active in a much larger fraction of the environments. Furthermore, the picture emerging from the paper is not of a strict division of cells into clusters, each representing a distinct map. The results are consistent with a picture in which the participation of each cell in any given environment is determined randomly and independently of the other cells.
In terms of the whole population, the above estimates suggest that the participation ratio (defined as the fraction of cells that are active in any given map) is somewhere between 10% and 20%. A similar conclusion can be reached by asking what is the probability that a cell will be active in a map B, given that it is active in map A (and is therefore already classified as a place cell). This information can be extracted from the histogram shown in Fig. 4A: the cells that had a field in one map or more participated, on average, in the representation of 2.75 rooms out of 11. Excluding the environment used to classify the cell as a place cell, the participation ratio is no less than approximately (2.75-1)/(11-1) = 1.75/10 = 0.175.
With a participation ratio of ~0.15, neurons expressing a bump state in one environment are expected to receive numerous synaptic inputs resulting from weights associated with any other map. These will add up in proportion to the number of embedded spatial maps. Furthermore, many studies indicate that the hippocampus encodes memories other than spatial maps, which are expected to contribute as well to the frozen noise. Overall, the effect of frozen noise associated with the embedding of multiple memories will depend on the participation ratio p, on L (the number of spatial maps and other associated memories), and on N, the number of neurons participating in each map. Importantly, the flatness of each attractor state is inevitably compromised in the naive embedding scheme. The magnitude of the effect must increase in proportion to the number of embedded memories, and therefore it must become large for sufficiently large L.
For computational simplicity we implemented in our simulations a network architecture in which all cells participate in all maps. However, it would be straightforward to adapt the architecture to one with a participation ratio smaller than 1. We agree that it will be interesting in a follow-up study to examine (numerically and using analytical tools) the interplay between the participation ratio p, the number of embedded maps L, and the number of neurons participating in each map N. Note that small p implies low interference, which is beneficial since it reduces the frozen noise, but when keeping the total number of CA3 neurons fixed, it also implies a reduction in the number of neurons participating in each map, and in each bump state. This is expected to reduce the resilience of each attractor to the frozen noise.
We thank the reviewer for raising these questions. We will add a paragraph in the Discussion of the final version, in which we will discuss the fact that in reality, neurons in CA3 participate only in a subset of maps, and how this could be addressed in future studies. | Rebuttal 1:
Rebuttal: We thank the reviewers for their careful reading of our submission and for their insightful comments. Please see our point-by-point responses to each review. We will highly appreciate any additional comments or requests for clarification that may arise in response to our answers to the questions. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors present a new technique for embedding multiple attractor manifolds into RNNs. To do so, they first randomly choose a number of attractor manifolds, embed these into an RNN, and then make weight adjustments to smooth out the interference created by multiple manifolds. The authors propose two strategies for weight adjustment, which consider first- and then second-order interference effects. These adjustments are shown to iteratively improve several intuitive metrics.
Strengths: This paper addresses an interesting and well-defined problem. This is not my area of expertise, but the results seem quite general to understanding RNN function, and thus significant.
The paper is well written and the results clearly presented. The stability metrics are intuitive.
The presented solutions are strikingly effective!
Weaknesses: While the results are very strong, the paper only explores a single task (embedding 1D ring attractors). Do the results hold when moving to, say, two dimensions? How does the number of neurons in the network interact with task complexity?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: the phrase "quenched noise" is used a lot - can this be defined more clearly upon introduction? My idea of "quenched" is equivalent to "reduced", which does not seem to align with how this phrase is being used.
L15: typo: adjustment -> adjustments
L151: typo: "emfbedded"
Fig 1b top: only 9 red lines - need to expand axis limits?
Fig 2b top: it is interesting that modification - while flattening the energy landscape - also raises the overall energy of the system. It would be instructive if the authors could offer some intuition for why this is the case. Also, in Fig 4a it appears that the energy after second-order modification is actually less than the single map case - will this always be the case? How should I think about this result?
Figs 3-5: what are error bars? The captions say mean+/-SEM, but what are these statistics computed over? Are there multiple network instantiations for each value of L?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: One limitation that is already pointed out by the authors is that their approach is not a biologically plausible learning algorithm; however, I agree that this proof-of-principle is a solid first step and that investigating other mechanisms for weight updates is an interesting direction for future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >While the results are very strong ... task complexity?
In order to reduce the computational cost, we explored our schemes in 1D, but it is straightforward to extend our approach and implement it in 2D. Conceptually, we do not expect a qualitative difference when doing so. One notable difference between 1D and 2D environments is that the number of neurons required to achieve a good approximation to a continuous attractor, even for a single map, scales in proportion to the area in 2D, as opposed to length in 1D. But for a given number of neurons, there is no substantial difference between the two cases in terms of the complexity of the problem: the number of equations scales as N*L, and the number of parameters (synaptic weights) scales as N^2. Since the random permutations are completely unrelated to the spatial organization of the firing fields, quenched (frozen) noise is expected to behave similarly in the two cases.
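The stated scaling can be spelled out with illustrative numbers (the specific N and L here are assumptions, not the paper's experimental settings):

```python
# Scaling of the embedding problem: the number of steady-state conditions
# grows as N * L (one per position per map), while the number of synaptic
# weights grows as N ** 2 -- the same in 1D and 2D environments.
for N, L in [(256, 10), (1024, 10)]:
    n_equations = N * L
    n_parameters = N ** 2
    print(f"N={N}, L={L}: {n_equations} equations, {n_parameters} weights")
```

For these values the number of free weights comfortably exceeds the number of conditions, which is consistent with the claim that dimensionality, per se, does not change the difficulty of the problem.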
Please see also our last response to reviewer fsQk.
>the phrase "quenched noise" ... being used.
The term 'quenched noise' is used in statistical physics to describe frozen disorder in many-particle systems (as opposed to noise and disorder arising from dynamic fluctuations). We defined this term in our context in the introduction (line 54): it describes a hard-wired (and thus also referred to as frozen) noise in the connectivity which is independent of time. This is in contrast to dynamical noise that affects instantaneous neural firing rates (see also Refs. [23,5]).
We will add the following clarification in the final version:
“Unlike stochastic dynamical noise which is expressed in instantaneous neural firing rates, time-independent quenched noise is hard-wired in the connectivity – it breaks the symmetry between the continuum of neural representations.”
>L15: typo: adjustment -> adjustments L151: typo: "emfbedded"
Both fixed.
>Fig 1b top: only 9 red lines - need to expand axis limits?
Yes! Fixed, thanks.
>Fig 2b top: it is interesting ... why this is the case.
The (flat) modified energy landscapes in Fig. 2b (top and bottom, orange traces) are not the true energy landscapes, because the energy is evaluated using idealized bumps, and these are not true steady states of the multi-map network. These precisely flat energy traces are shown as a sanity check, to demonstrate that the added weight modifications indeed produce the expected outcome of a flat energy landscape when re-evaluating the energy landscape using idealized bumps. For this reason, the comparison between the absolute energy values of the pre-modified (blue) and modified (orange) landscapes in Fig. 2b is not fully justified.
Subsequently in the manuscript, we performed a precise evaluation of the energy landscape, by replacing the idealized bumps with steady states obtained through the constrained gradient optimization scheme. The appropriate figure to look at is therefore Supplementary Fig. 3b, where we show the precise energy landscape across multiple iterations. Here it is evident that the overall (mean) energy does increase, as suggested by the reviewer.
In our scheme, we do not seek to reach a prescribed value of the energy in each iteration, but only to equalize the energy across states at a value which is determined implicitly in the optimization scheme. This introduces a bit more freedom in the choice of the correction weights than if targeting for a prescribed energy. We suspect that the increase of the energy in our scheme is specific to the choice of the single-map connectivity that we worked with, and that other choices of this connectivity could elicit a decrease of the energy during optimization. We did not check this systematically, however, since uniform shifts of the energy landscape are inconsequential for the stability and dynamics of the bump states.
>Also, in Fig 4a ... about this result?
We did not show how the energy depends on the number of embedded maps. We realize now that Fig. 2a might be confusing in this respect: throughout the manuscript, we uniformly shifted the energy by a constant in order to allow for the landscapes to be shown on the same plot for a single map and for 10 maps (Fig. 2a). This constant was selected such that the mean energy of idealized bump states across all states and maps (without any weight modification) is zero: thus, the mean of the blue trace in the bottom panel of Fig. 2b is zero (as well as single map blue traces, Fig. 2a). Therefore, comparison of absolute mean energies in Fig. 2a between single map and 10 maps traces is not meaningful. This is now explicitly explained in the final version.
We can, however, compare the absolute mean energy value for landscapes with a varying number of maps before shifting and centering them around 0: when doing so, we find that as more maps are embedded in the connectivity, the absolute mean energy of the landscape decreases. To understand why this happens one should look at the expression for the energy (Eq. 1). The network parameters (Supplementary Material) were chosen such that the sum of rows/columns of the basic connectivity matrix is a constant value. Therefore, each additional map embedded in the connectivity will contribute, when averaging over a random permutation, a constant shift to the energy through the third term of Eq. 1. This constant is negative for our choice of the single-map connectivity matrix.
In summary, there is a systematic shift of the mean energy with addition of new maps that depends on the specific choice of the single-map connectivity matrix, and occurs even without any weight modifications. The weight modifications introduce additional shifts in the energy, but these are fairly subtle compared to the pre-modified dependence of the mean energy on the number of maps.
>Figs 3-5: ... value of L?
Yes, please see Supplementary Material lines 138-141. We added in the final version a reference to the Supplementary Material in the captions of Figs 3-5 (after mentioning SEM).
---
Rebuttal Comment 1.1:
Comment: Thank you for the thorough response.
Regarding 1D vs 2D environments, this is likely a common question that will come up among readers. I'd suggest at least adding something to this effect, perhaps in the Discussion?
> One notable difference between 1D and 2D environments is that the number of neurons required to achieve a good approximation to a continuous attractor, even for a single map, scales in proportion to the area in 2D, as opposed to length in 1D. But for a given number of neurons, there is no substantial difference between the two cases in terms of the complexity of the problem: the number of equations scales as N*L, and the number of parameters (synaptic weights) scales as N^2. Since the random permutations are completely unrelated to the spatial organization of the firing fields, quenched (frozen) noise is expected to behave similarly in the two cases.
While I don't think it's strictly necessary to include a 2D example in the manuscript, I think the impact (and at least perceived generality) of this work would increase with said example (even a small one, even in the supplementary).
My other concerns/confusions have been adequately addressed.
---
Reply to Comment 1.1.1:
Comment: Thanks for the suggestion, we will mention and address this (2D) in the Discussion of the final version. | null | null | null | null | null | null |
Resilient Multiple Choice Learning: A learned scoring scheme with application to audio scene analysis | Accept (poster) | Summary: The paper tackles the problem of multiple-choice learning (MCL) in the regression setting with specific focus on the overconfidence problem and hypothesis collapse problem found in previous approaches for MCL where predictions from the heads corresponding to rare events are overestimated. The proposed rMCL model frames the problem as a multimodal conditional distribution estimation problem. The authors propose a new loss function that adds a new hypothesis scoring loss to the existing multi-target winner takes all (WTA) loss.
The proposed algorithm is compared against other baseline models on both a toy dataset, as well as a sound source localization (SSL) task. The experiments on the toy datasets visually show the effectiveness of the rMCL approach in tackling the overconfidence problem as compared to a method that does not use the additional loss term. The results on the SSL dataset show that the rMCL algorithm is practically applicable to problems with multimodal outputs with consistently better performance in the case of multiple sound sources as compared to the other algorithms.
Strengths: - The presented approach builds on top of existing literature and solves the overconfidence problem using an additional term to the existing loss function.
- The authors visually demonstrate the effectiveness of the approach using a simple toy dataset, and follow up with application to a real-world regression task (SSL).
Weaknesses: - The experiment on SSL is not very clear to me and might benefit from some additional details. Specifically, I am not clear on the interpretation of the EMD and oracle error. My understanding is that for each input sound snippet, the model predicts the angles of each source from each of the output heads. Does the oracle only count the best prediction out of all the source angles predicted, while the EMD accounts for the overall error?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See above
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have addressed limitations with the presented work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the remarks and the feedback on the paper.
The reviewer is indeed correct in that the EMD (Earth mover's distance or Wasserstein-1 metric [A]) considers all the hypotheses predicted and their associated scores. When it comes to the oracle metric, it evaluates only the best prediction for each target source (e.g., [11,20,29,24,22,18,6]). To be more detailed, we elaborate here-after on the SSL task, and the metrics related to multiple choice learning. We will clarify their presentation in the paper accordingly.
Given audio tracks recorded from a microphone array, the SSL task consists of predicting, at a given temporal rate, the positions of specific sound sources one is interested in. Sound sources can appear or disappear in the record, and their number can vary. As for the position, in those benchmarks, we are only interested in the angular position.
Because of the uncertain and multi-modal nature of the prediction task, it can be useful to cast it in a multiple choice learning or distribution learning framework. In such settings, the models output different plausible predictions or hypotheses for any given input. The difficulty that arises then is how to assess the quality of the multiple predictions of the model when only one realization of the ground-truth distribution is observed for any given input.
Whenever a single target is available, which corresponds to one source to localize in SSL, one possibility is to compute the error for the prediction closest to the ground truth, that is the Oracle metric: we get 0 oracle error if the ground-truth is in the pool of predictions. When multiple targets are present, the oracle metric averages over the best hypothesis for each target [6]. Heavily used in the MCL framework [11,20,29,24,22,18,6], this metric therefore informs about the mean quality of the best hypotheses predicted.
The other metric, the Earth Mover’s distance (EMD), computes the optimal transport cost [A] between the ground truth and predicted distribution, both being cast as a mixture of Diracs in our case. Indeed in those audio datasets, sound sources can be assumed to be point-wise, and the ground-truth distribution to predict can be cast as a uniform mixture of Diracs. This considers all the hypotheses, weighted by their normalized score, and allows for a more complete evaluation; it informs about the global consistency of the hypotheses predicted [22].
Both the Oracle and the EMD are equipped with an underlying distance adapted to the geometry of the problem. Since we are dealing with angles only, this underlying distance is angular distance here.
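The two metrics can be sketched as follows; the angles and scores are made up for illustration, and for brevity the EMD here uses the standard 1D Wasserstein distance on raw angles, ignoring the circular wrap-around that the paper's angular-distance version accounts for:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def ang_dist(a, b):
    """Angular distance in degrees, wrapped into [0, 180]."""
    d = np.abs(a - b) % 360.0
    return np.minimum(d, 360.0 - d)

def oracle_error(targets, hyps):
    """Mean, over targets, of the error of the closest hypothesis."""
    return float(np.mean([ang_dist(t, hyps).min() for t in targets]))

def emd(targets, hyps, scores):
    """1D Wasserstein-1 between the score-weighted predicted Dirac mixture
    and the uniform mixture of target Diracs (no circular wrap-around)."""
    return wasserstein_distance(
        hyps, targets,
        u_weights=scores / scores.sum(),
        v_weights=np.full(len(targets), 1.0 / len(targets)),
    )

hyps = np.array([10.0, 95.0, 270.0])   # predicted source angles (hypotheses)
scores = np.array([0.5, 0.3, 0.2])     # learned hypothesis scores
targets = np.array([12.0, 100.0])      # ground-truth source angles
print(oracle_error(targets, hyps))     # 3.5: best errors are 2 and 5 degrees
print(emd(targets, hyps, scores))      # also penalizes the spurious 270-degree mass
```

The oracle rewards having at least one good hypothesis per target, while the EMD additionally penalizes any score mass placed on spurious hypotheses.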
[A] Kantorovitch, L. (1958). On the translocation of masses. Management science, 5(1), 1-4. | Summary: The authors propose Resilient Multiple Choice Learning (rMCL), a modification of the Multiple Choice Learning (MCL) approach, for conditional distribution estimation in regression contexts where each input can have multiple target samples. While MCL is a straightforward strategy for multimodal density estimation, it uses the Winner-Takes-All (WTA) loss for various hypotheses. In regression situations, the prevailing MCL versions focus on combining the hypotheses, which may compromise the diversity of the predictions.
In contrast, rMCL employs a new learned scoring system, which is supported by a mathematical framework based on Voronoi tessellations of the output space. This approach allows for a probabilistic interpretation of the results. The authors tested rMCL using synthetic data and found it to be effective. They also applied it to the problem of sound source localization, demonstrating its practical utility and the relevance of its interpretation.
Strengths: Novel Approach: The proposed Resilient Multiple Choice Learning (rMCL) provides a fresh perspective to tackle the Multiple Choice Learning (MCL) overconfidence problem, especially in regression settings. This offers a new method for researchers and engineers to approach this issue.
Learned Scoring: rMCL is based on a learned scoring scheme that handles multi-target settings. This flexibility could allow rMCL to perform well across a variety of tasks and datasets.
Probabilistic Interpretation: The authors provide a probabilistic interpretation of the model, which could help in understanding the model's behavior, tuning its performance, or extending it to new applications.
Evaluation and Application: The paper demonstrates the practical utility of rMCL, especially in the context of sound source localization. It illustrates the resilience of the model in both synthetic and real-world datasets, showing that the method can handle real-world complexity.
Advantages Over Previous Methods: rMCL, applied to the Sound Source Localization (SSL) problem, seems to alleviate the issues related to imbalanced spatial positions and the source permutation problem. It does not require prior knowledge of the number of sources, which is a significant advantage in practical applications.
Weaknesses: Dependence on high-quality data: like many machine learning models, rMCL's performance may be significantly affected by the quality and representativeness of the training data. Further evaluation is needed to determine how the model behaves in the presence of noisy data and noisy labels.
The rMCL approach may also not scale well: as the size and complexity of the dataset grow, the hypothesis space to be covered becomes large, which may affect performance.
It's not clear how rMCL would respond to different types of noise in the data or how robust it is to outliers.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Based on the weaknesses, it would be helpful for the authors to comment on how the rMCL approach might work in the case of noisy data, and whether there is any comparative analysis of Winner-Takes-All-type approaches that take label noise into account.
Despite the probabilistic interpretation offered by the rMCL approach, it might still be challenging to understand and explain the model's decision-making process, which could be a limitation in applications where interpretability is crucial. How do the authors see the probabilistic interpretation aiding, or being modified to improve, the interpretability of the learned model? The probabilistic interpretation may also yield incorrect results if the model overfits.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors state the limitations of the work clearly.
They review the performance of the model and conclude that it appears that the performance of Resilient Multiple Choice Learning (rMCL) is influenced by the number of hypotheses being considered. Particularly, attempting to predict an overly large number of hypotheses without prior knowledge can introduce errors into score predictions. In addition, using rMCL to address the overconfidence issue inherent in Winner-Takes-All (WTA) variants can slightly degrade the performance of the best hypothesis, though the reason for this is not currently known. Lastly, while ε-WTA seems to behave as expected with rMCL, the same is not observed for top-n-WTA, necessitating further study.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their insightful comments.
> Based on the weaknesses, it would be helpful for the authors to provide comments on how the rMCL approach might work in the case of noisy data, and whether there is any comparative analysis of Winner-Takes-All-type approaches that take label noise into account.
The reviewer is asking about the robustness of the model in the presence of label noise, which is a challenging problem. Label noise corresponds to aleatoric uncertainty [8,4], which cannot be reduced with more data (e.g., it corresponds to measurement error).
To the best of our knowledge, there is no comparative analysis of Winner-Takes-All type approaches in the context of noise; we provide, in the next section, insights about the resilience of the proposed model in the presence of outliers in the data.
**Performance of rMCL in the presence of target outliers**
We thank the reviewer for their question which prompted some interesting investigation. We believe that rMCL, thanks to the partition into Voronoi tessellations and the scoring scheme, has good properties for handling outliers. To illustrate this, we propose the following experiment.
Consider a setting with outliers in the training dataset, for instance the toy use case presented in the paper, where for each training example sampled, the probability of getting an outlier is $p \ll 1$, e.g., modeled with a bivariate Cauchy distribution. Then, whenever an outlier is sampled, one hypothesis will be pushed towards it and its associated score head updated. As training goes on, some of the hypotheses will handle the outlier samples; let us call them the "outlier hypotheses". Thanks to the proposed hypothesis scoring heads, the model will also learn the probability that an outlier hypothesis is chosen for a given training sample. Provided that the outlier likelihood $p \ll 1$, the scoring heads will therefore prevent the outputs of the outlier hypotheses from deteriorating the quality of the distribution predicted by rMCL. Fig. A illustrates this phenomenon using a Cauchy distribution (with $p=0.02$): the outlier hypotheses account for the outlier samples, while the other hypotheses lie in the square $[-1,1]^{2}$ covering the samples from the ground-truth distribution.
Provided that the probability $p$ of sampling an outlier is small enough and the outliers are far enough from the ground-truth distribution, the proposed rMCL model is therefore potentially robust to outliers: specific hypotheses, namely the outlier hypotheses, are assigned to them, preventing the non-outlier hypotheses from being heavily affected. At inference time, it is then possible to zero out the very low-score hypotheses given an arbitrary threshold, so that the outlier hypotheses are not taken into account.
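The mechanism described above can be sketched numerically. The following toy script (not the authors' code; all names and hyper-parameters except $p=0.02$ are illustrative assumptions) runs Winner-Takes-All updates on point hypotheses with an empirical per-hypothesis score, under rare heavy-tailed outliers:

```python
import numpy as np

# Toy WTA sketch: K point hypotheses, uniform ground truth on [-1, 1]^2,
# rare Cauchy outliers with probability p_out (value from the rebuttal).
# The empirical win frequency stands in for the learned score heads.
rng = np.random.default_rng(0)
K, p_out, lr, n_steps = 8, 0.02, 0.1, 5000

hyps = rng.uniform(-1, 1, size=(K, 2))  # hypothesis positions in the plane
wins = np.zeros(K)                      # how often each hypothesis "wins"

for _ in range(n_steps):
    if rng.random() < p_out:
        y = 5.0 * rng.standard_cauchy(2)   # heavy-tailed outlier sample
    else:
        y = rng.uniform(-1, 1, 2)          # sample from the ground-truth square
    k = int(np.argmin(np.sum((hyps - y) ** 2, axis=1)))  # WTA winner
    hyps[k] += lr * (y - hyps[k])          # pull only the winner toward y
    wins[k] += 1

scores = wins / wins.sum()  # empirical analogue of the learned scores
```

A hypothesis dragged far outside the square wins only on the rare outliers, so its empirical score stays near $p$, and thresholding low-score hypotheses at inference discards it, which is the robustness argument made above.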
> "Despite the probabilistic interpretation offered by the rMCL approach, it might still be challenging to understand and explain the model's decision-making process, which could be a limitation in certain applications where interpretability is crucial. How do authors see the probabilistic interpretation aiding/modified to overcome the interpretability of the model learnt? It might be the case where the probabilistic interpretation may yield incorrect results with overfitting of the model."
The probabilistic interpretation allows us to state that the different hypotheses would be organized in an optimal partition of the output space forming a Voronoi tessellation, providing insights about the distribution to predict through the score heads. In an ideal case, this would lead to each hypothesis capturing a region of the distribution, and the scores being how likely this zone would activate in a given context. Of course, in a realistic setting, the hypothesis might not adhere to a meaningful region, but this can be controlled and evaluated provided we have additional annotations in our data.
Another asset of our approach is that the scoring function, similarly to a classifier, should provide an insight about the confidence of our model on each hypothesis. While the predicted probabilities can be wrong as stated by the reviewer, we can evaluate and calibrate the scoring function independently on a validation set [A,B], which should allow us to identify and alleviate such issues.
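One standard recipe for the calibration mentioned above, in the spirit of [A], is temperature scaling on a held-out validation set. The sketch below uses purely synthetic data standing in for a validation set of over-confident scores (all names and hyper-parameters are my assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def nll(logits, labels, T):
    """Mean negative log-likelihood of the labels at temperature T."""
    p = softmax(logits / T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

# Synthetic "validation set": the model's logits are over-confident
# (true logits scaled by 3), so a temperature T > 1 should lower the NLL.
true_logits = rng.normal(size=(2000, 5))
labels = np.array([rng.choice(5, p=p) for p in softmax(true_logits)])
val_logits = 3.0 * true_logits

grid = np.linspace(0.5, 5.0, 46)           # candidate temperatures
T_best = grid[np.argmin([nll(val_logits, labels, T) for T in grid])]
```

Since the grid contains $T=1$, the calibrated temperature can only match or improve the validation NLL, which is the sense in which the scoring function can be "evaluated and calibrated independently".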
We hope that we understood the concerns of the reviewer correctly and that these considerations address them. We would be happy to further discuss these topics and clarify any potential misunderstanding.
[A] Guo, C., Pleiss, G., Sun, Y., & Weinberger, K. Q. (2017, July). On calibration of modern neural networks. In International conference on machine learning (pp. 1321-1330). PMLR.
[B] Song, H., Diethe, T., Kull, M., & Flach, P. (2019, May). Distribution calibration for regression. In International Conference on Machine Learning (pp. 5897-5906). PMLR. | Summary: This paper proposes a technique for resilient Multiple Choice Learning (rMCL), which extends the vanilla Muliple Choice Learning (MCL) paradigm to conditional distributions for regression where multiple targets maybe sampled for each training input. It is known that MCL uses multiple scoring heads to score multiple hypothesis for a given input and suffers from the twin challenges of hypothesis collapse (where only a small subset of the possible prediction heads are trained well, as a result of the Winner Take All strategy) and overconfident predictions (where rare classes are overly represented). This work focuses principally on addressing the latter issue. A key feature of the method is the use of a learned scoring scheme based on Voronoi tessellations which lends itself to a probabilistic interpretation. Results are reported on the sound source localization task and on synthetic data and compared against some standard alternatives.
Strengths: The following are the key strengths of the proposed approach:
1. This is possibly the first work that extends MCL to a regression setting.
2. The proposed solution attempts to overcome the overconfidence problem of a standard MCL system by casting the MCL as a conditional distribution estimation technique while allowing for a probabilistic interpretation of the same.
3. Experiments show that the method, which is capable of assigning prediction probabilities to low density regions, is also interpretable.
4. The code for the proposed method has been made available.
Weaknesses: The following are some of the principal weaknesses of the proposed approach:
1. The proposed method does not deal with the hypothesis collapse problem of MCL methods.
2. The contrast with prior approaches is not well brought out in the Related Works section.
3. No performance is reported on more recent sound source localization datasets such as LOCATA or the DCASE 2019 dataset.
4. Moreover, the proposed approach falls way short of competing methods on the Oracle metrics, on both the reported datasets. The authors present too much emphasis on just the multimodal EMD results without justifying why the drop in Oracle distance metric should be ignored.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Following are some of the questions/comments for the authors:
1. How does the current work compare against prior works in Multilabel Learning? [1, 2]
2. How do the methods perform on the Euclidean distance measure, compared with the Oracle Error?
3. This reviewer fails to see the value in ensembling the proposed approach with WTA (Table 3). Why not combine with IE or PIT then?
4. Please rephrase Line 111: "More precisely...."
References:
[1] Zhu, X., Li, J., Ren, J., Wang, J. and Wang, G., 2023. Dynamic ensemble learning for multi-label classification. Information Sciences, 623, pp.94-111.
[2] Kim, Y., Kim, J.M., Akata, Z. and Lee, J., 2022. Large loss matters in weakly supervised multi-label classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 14156-14165).
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes the limitations of the method have been well addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback and comments, as well as for the suggestions for extending the experimental results of the paper. We provide here a detailed answer to the raised concerns.
**The metrics interpretation**
> “The authors present too much emphasis on just the multimodal EMD results”
We emphasize multimodal settings rather than unimodal ones because rMCL is suited for multimodal density estimation. The proposed approach is particularly relevant when the output conditional mean is not the best solution, which generally corresponds to multimodal distributions to predict.
> “without justifying why the drop in Oracle distance metric should be ignored.”
In our experiments, the focus was primarily on the EMD metric, as the Oracle does not address the issue of overconfidence. It only considers the best hypotheses, without accounting for the global consistency of the prediction; if a low-probability-density zone is overestimated, this will not be measured by the Oracle metric. This clarification will be added to the paper.
> “How do the methods perform on the Euclidean distance measure, compared with the Oracle Error?”
As highlighted in L.257, the Oracle error is computed using an underlying distance $d$ adapted to the output geometry, e.g., the Euclidean distance in the toy example or the spherical distance in the SSL task. For those benchmarks, we are only interested in angular rather than Cartesian positions, so computing a Euclidean distance is not representative of the task. This point will be clarified in the paper.
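The Oracle's blindness to overconfident spurious mass, discussed above, is easy to see concretely. In this toy sketch (function and variable names are mine, not the paper's), the Oracle error is the mean, over targets, of the distance to the closest hypothesis, so adding a spurious far-away hypothesis changes nothing:

```python
import numpy as np

def oracle_error(hyps, targets, d):
    """Mean over targets of the distance to the closest hypothesis."""
    return float(np.mean([min(d(h, t) for h in hyps) for t in targets]))

euclid = lambda a, b: float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

hyps = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]   # (5, 5) is a spurious hypothesis
targets = [(0.1, 0.0), (1.0, 0.9)]
err = oracle_error(hyps, targets, euclid)      # the spurious hypothesis at
                                               # (5, 5) never becomes the
                                               # closest one, so it is ignored
```

A distribution-level metric such as the EMD, by contrast, would penalise probability mass placed on the spurious hypothesis, which is why the rebuttal focuses on it.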
> “This reviewer fails to see the value in ensembling the proposed approach with WTA (Table 3). Why not combine with IE or PIT then?”
These choices are explained by the fact that our approach can be seen as an extension of the WTA training scheme. As such we wanted to evaluate how our approach would combine with commonly used WTA extensions (top-$n$ and $\varepsilon$). It is not straightforward to combine it with PIT or IE. Indeed PIT is not related to MCL and IE was constructed from single-hypothesis WTA models that are not amenable to our method.
**Performance on more SSL datasets**
Following the reviewer's suggestion, we conducted more experiments on SSL datasets: REAL [1] and DCASE 2019 [A], where the maximum numbers of overlapping events are three and two, respectively. As in the manuscript's tables, models were evaluated separately in unimodal and multimodal conditions. For REAL and DCASE19, we computed mean metric values over 2 and 3 training runs, respectively, presented in Tables A and B.
**Tables A & B reveal trends aligned with the paper's findings:**
- Consistent with Section 4.5's analysis, increasing the number of hypotheses improves the Oracle, but also slightly degrades the EMD, while still surpassing the PIT baseline in multimodal settings.
- The winner-takes-all approach and the IE variant, with one hypothesis and a single target update (see L.21-23 in Supplementary material), still outperform the other methods whenever a single source position is to be predicted.
- While vanilla WTA slightly outperforms our method on the Oracle metric with the same number of hypotheses, a significant gap remains on the EMD metric when predicting multiple hypotheses (e.g., 5).
- rMCL shows consistent performance across datasets, where top-n and $\varepsilon$-WTA can show a very wide disparity.
The optimal number of hypotheses on DCASE19 is lower than in previous datasets, probably due to its less multimodal nature. We conducted various visualizations to compare baselines and the proposed approach, confirming the competitive performance on those two SSL datasets.
**The collapse problem**
In our audio experiments, neither the vanilla WTA nor the proposed rMCL model exhibited collapse. We confirmed this by analyzing histograms of winner hypothesis heads from trained models during testing, as illustrated by Fig. A's 20-hypothesis rMCL model trained on ANSYN. As mentioned in [13, p.8], we think collapse is in practice solved by the variability of the data samples and the training stochasticity. This is why we did not study the collapse problem, but we will include this discussion in the Supplementary.
**Comparison with prior work in Multi-label learning [A,B]**
We thank the reviewer for the suggested references, to be included in the Related Work. [A] approaches the topic of Multi-label learning from the ensemble learning perspective, while [B] addresses multi-label classification in images with missing labels, presenting a variant of multiple choice learning. Each hypothesis in this context would aim to predict a possible class present in the image while expecting diverse predictions [18,20,29]. These two approaches are, however, classification methods whereas our paper focuses specifically on MCL for regression tasks. Adapting such methods to multi-source regression is not straightforward.
**About the related work section**
The related work section will be enhanced, focusing on contrasting it with the paper's contribution.
- The link between uncertainty estimation and MCL will be made clearer through ensembling [20] (L.59-60 in the manuscript).
- The contrast with previous works in MCL [13,20,29,18,11,24,22,6] will be emphasized; ours tackles the overconfidence problem in regression settings without merging the hypotheses (e.g., [13,24,22]), by revisiting [29]. This results in a gain in diversity of the hypotheses and is suitable for extending the MCL probabilistic interpretation proposed in [24].
As suggested, the sentence in L.111 will be rephrased.
[A] Zhu, X., Li, J., Ren, J., Wang, J., & Wang, G. (2023). Dynamic ensemble learning for multi-label classification. Information Sciences, 623, pp. 94-111.
[B] Kim, Y., Kim, J. M., Akata, Z., & Lee, J. (2022). Large loss matters in weakly supervised multi-label classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 14156-14165).
---
Rebuttal Comment 1.1:
Title: Thanks for addressing my concerns
Comment: In light of the additional experimental results presented and the response to my other questions, I am raising my score. | null | null | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their remarks and suggestions, which will allow us to improve the quality of the paper.
We summarize here the main changes that will be made to the submission in the next revision, in accordance with the reviewers' input. Please refer to the individual responses for more detailed comments about these changes, as well as answers to the questions.
- We provide results on two additional sound source localization (SSL) datasets; REAL [1] and DCASE19 [A] in Tables A and B of the rebuttal. The manuscript will be updated accordingly [Reviewer yLZS].
- We will add new insights about the resilience of the proposed rMCL model in the presence of label noise and outliers (See Figure B of rebuttal) in the Supplementary [Reviewer YyHX].
- We will also expand on the motivations for the probabilistic interpretation and include the discussion in the main paper [Reviewer YyHX].
- A discussion regarding the collapse problem will be added in the Supplementary material [Reviewer yLZS].
- The interpretations of the metrics used, the EMD and Oracle will be clarified in the paper [Reviewers YyHX and 3WKf].
- Related work will be improved [Reviewer yLZS].
[A] Adavanne, S., Politis, A., & Virtanen, T. (2019). A multi-room reverberant dataset for sound event localization and detection. arXiv preprint arXiv:1905.08546.
Pdf: /pdf/e0ca83b299846c717fb0b032c29775ae3a26a076.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Learning and processing the ordinal information of temporal sequences in recurrent neural circuits | Accept (poster) | Summary: In this manuscript the authors use a custom training regime to force simple recurrent neural networks (RNNs) to learn the ordinal structure of sequential inputs. Specifically, they train the network to learn the order of sequences by presenting the elements of a sequence with variable durations and variable intervals between elements. They demonstrate the utility of these networks in both a toy transfer learning task and a more realistic "key-word spotting" task.
Strengths: Strengths:
1. The manuscript presents the key ideas in a straightforward and lucid manner
2. The core idea – training RNNs to recognize the ordinal structure of its inputs through varying the duration of its constituent elements – is interesting with potential downstream applications in machine learning and computational neuroscience
Weaknesses: Weaknesses:
Major:
1. The results in Figure 3 are difficult to interpret without a more systematic bank of control models. For instance, in the transfer learning task, as far as I can tell the authors compare transfer learning of their particular model to a model with no transfer learning. This does not tell us if the tree structure learned by their model is doing the heavy lifting. Specifically, have the authors compared their "tree-structure" model to an RNN that learns the task without using tree structure? Without this key control we cannot assess whether tree structure is what confers performance in this task, or if it is simply a matter of their model having been trained to solve *any* task prior to transfer learning.
2. I am a bit confused about what Figure 4 is showing. Here, the authors train their network using varied intervals between "chunks" of a given word. One issue I see with this training regime is that, in addition to enforcing tree structure in the network dynamics, it also presents a "warped" version of the inputs. In other words, does inserting a delay between phonemes create spectral structure more similar to the test inputs than the phonemes *without* gaps? If true, this would imply that the network is simply seeing training data more similar to the test data, rather than leveraging tree structure. Further controls and analysis would alleviate this concern.
Minor:
1. Typos throughout, the manuscript needs a pass on the writing.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Can the authors provide more detail about the control models in Figures 3 and 4? My concerns could be simple misunderstanding.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the valuable comments, which are very helpful for us to improve the paper. Below are our detailed replies.
Weaknesses
**On the contribution of tree-structured attractors**
Thanks for raising this concern. To address it, we conducted an additional experiment (see Fig.R2-C in the uploaded reply PDF), in which we trained a control model having the same network architecture, except that the cross-entropy loss, rather than the ramping target function, was used. With this setup, the control model learned the sequence discrimination task, but no longer acquired the tree-structured attractors. We then applied the control model to the same transfer learning task (following the same protocol of freezing the connections in the recurrent and read-out layers). As shown in Fig.R2-C, our model exhibits significantly accelerated learning compared to the control model. This experiment demonstrates that the tree-structured attractor dynamics (the schema) is indispensable for fast transfer learning. Together with our other analyses in Fig.3A and Fig.S3 (which verify that the attractor template is indeed reused during transfer learning), we conclude that the tree-structured attractor dynamics is critical for fast transfer learning.
**On the control of training data in warping performance**
Thanks for raising this concern. To clarify it, we conducted an additional experiment, in which we built a control model that uses warped sequences directly as the training data. This makes the training and test data even more similar than in our model. We adopted the cross-entropy loss to train the control model, so the recurrent network did not acquire proper tree-structured attractors. We found that when the temporal warping value falls in the range [1, 1.5], i.e., the range of the lengths of training sequences, both models exhibit similarly good test accuracies. However, when the warping value is outside the training range, our model outperforms the control model significantly (see Fig.R2-D in the uploaded reply PDF). This control experiment excludes the possibility that the similarity between training and testing data leads to the good performance of our model, and it supports the claim that the tree-structured attractor dynamics contributes to the robustness of our model to warped sequences. The underlying mechanism can be attributed to two factors: 1) attractors ensure the robust responses of our model to stretched/compressed inputs; 2) the tree-structured attractor dynamics further averages out noise over the sequence, behaving like an evidence-accumulation process (see Fig.4C).
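The warping evaluation above stretches or compresses test sequences relative to the training range. A minimal sketch of such temporal warping by nearest-frame resampling follows; the function name and resampling scheme are my assumptions, not necessarily the augmentation used in the paper:

```python
import numpy as np

def warp_sequence(x, factor):
    """Stretch (factor > 1) or compress (factor < 1) a (T, D) sequence
    along the time axis by nearest-frame resampling."""
    x = np.asarray(x)
    T = x.shape[0]
    T_new = max(1, int(round(T * factor)))
    # map each new time index back to the nearest original frame
    idx = np.minimum((np.arange(T_new) / factor).astype(int), T - 1)
    return x[idx]
```

Training on sequences warped within [1, 1.5] and testing outside that range, as described above, is then just a matter of sampling `factor` from different intervals at train and test time.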
Minor
We will thoroughly improve the writing in the revised manuscript.
Questions
**On the control models in Figures 3 and 4**
In Fig.3, the control model shares the same network architecture, data augmentation and target function as our model. The only difference is in the training protocol. In our model, the connections in the recurrent layer, which store the tree-structured attractors, are frozen during transfer learning, and only feedforward connections are updated; whereas, in the control model, both recurrent and feedforward connections are re-trained (for more details, see lines 209-214 in the supplementary materials). This comparison allows us to demonstrate the effect of the schema on accelerating transfer learning.
In Fig.4, the control model shares the same network architecture as our model. In the control model, no data augmentation is used, and the cross-entropy loss is adopted. With this setting, the control model can learn a given set of sequences, but does not form the tree-structured attractor dynamics. This comparison is meant to demonstrate that the tree-structured attractor dynamics enhances the robustness of our model to warped sequences. However, as pointed out by the reviewer, since no data augmentation is used in the control model, it does not exclude the possibility that the improved performance of our model comes from data augmentation, as it tends to make the training and testing data more similar. To clarify this concern, we trained another control model as described above (see Fig.R2-D in the reply PDF), which directly uses warped sequences as training examples, and hence makes the training and test data even more similar than in our model. Again, we observe that our model outperforms the control model on unseen warped sequences. This further strengthens our conclusion that the learned schema facilitates the robustness of our model to warped sequences.
We will expand the descriptions of the control models (including the new ones) in the revised manuscript to make them clearer.
We hope that we have addressed all the concerns of the reviewer and could convince the reviewer to raise the score.
---
Rebuttal Comment 1.1:
Comment: The authors have adequately addressed all of my concerns, and I have raised my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you for your improved score and positive feedback on our paper. We greatly appreciate your comments as they help us to further refine our work ! | Summary: This paper investigates how recurrent neural circuits learn to represent the abstract order structure of temporal sequences and how the disentanglement facilitates sequence processing. The main objectives were better understand the brain's mechanisms for representing temporal sequence ordinal information and contributing to the development of brain-inspired sequence processing algorithms.
The authors found that given suitable training, a recurrent neural circuit can learn tree-structured attractor dynamics to encode the tree-structured orders of temporal sequences. They show that reusing a temporal order template aids the learning of new sequences sharing the same or partial ordinal structure and that the tree-structured attractor dynamics improve the robustness of temporal sequence discrimination.
Strengths: The authors demonstrated that in a supervised learning task, recurrent neural circuit can learn tree-structured attractor dynamics to encode the corresponding tree-structured orders of temporal sequences. They used a transfer learning task to show that once the network has learned the temporal structure, it can apply that knowledge to different temporal inputs - this was demonstrated by freezing the recurrent weights and training only weights in a feedforward layer that followed the recurrent layer. They also showed that data augmentation can lead to invariance to temporal rescaling. These results are consistent with several neuroscience studies.
Weaknesses: The evaluation is relatively limited and uses only short sequences and supervised learning. If the brain indeed uses a similar mechanism, then it should successfully scale to much longer sequences.
If the model is capturing cognitive properties of human sequence memory, then it should also be consistent with results from behavioral studies, which are characterized by effects such as primacy, recency and contiguity.
The authors might consider reflecting on or comparing their approach with other work related to modeling sequences in the brain, for example:
Cui et al. 2016 Continuous online sequence learning with an unsupervised neural network model
Graves et al. 2014 Neural Turing machines
Voelker et al. 2019 Legendre memory units: Continuous-time representation in recurrent neural networks
Eliasmith et al. 2013 A large-scale model of the functioning brain
Howard et al. 2014 A unified mathematical framework for coding time, space, and sequences in the hippocampal region
Whittington et al. 2020 The Tolman-Eichenbaum machine: unifying space and relational memory through generalization in the hippocampal formation
Also, for recent work on biologically inspired neural networks robust to temporal rescaling see
Jacques et al. 2022 A deep convolutional neural network that is invariant to time rescaling
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: When you used data augmentation and varied the duration between neighboring items, how many steps of BPTT did you have to do and have you encountered issues with vanishing/exploding gradients?
The authors reflected on the biological implausibility of backpropagation and proposed the possibility of using the fast Hebbian rule. It is not clear whether this could work in the context of backpropagation through time, since it would require that the biological system store a copy of the temporal input. Do the authors have a possible solution for this?
Do you think the observed properties could emerge if the training was done in a self-supervised way?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors addressed some of the limitations, especially regarding the backpropagation (but see my earlier comments on how I suggest expanding those).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the valuable comments, which are very helpful for us to improve the paper. Below are our detailed replies.
Weaknesses
**On the scalability of the model**
Thanks for raising this important issue. As a first step in presenting the framework, we only evaluate the model with short sequences; the study of long sequences will be added in future work. Here, we would like to point out that our model has the capacity to process long sequences, and it can achieve this in two biologically plausible ways.
First, the brain can combine short primitive templates to form long ones. To demonstrate this idea, we conduct a preliminary experiment, in which the network learns to represent sequences of length 3 by dynamically combining two shorter templates of length 2 (see Fig.R1 in the uploaded reply pdf).
Second, the brain can employ a hierarchical network to combine ordinal templates in each layer hierarchically. Notably, this way of combining primitive templates to encode long sequences was proposed by Chomsky for language processing, the so-called minimalist program [1]. The program suggests that the human brain is capable of processing arbitrarily complex and lengthy language sentences through recursive dynamic binding of primitive linguistic units.
**On the cognitive properties of human sequence memory**
Thanks for the suggestion. The effects of primacy, recency, and contiguity are often reported in sequential working memory tasks, and a potential computational mechanism is the short-term plasticity (STP) of synapses [2]. Here, our focus is on studying the learning of tree-structured attractors (the schema), but we presume that if we also consider a sequential working memory task and include STP in feedforward connections, we should observe similar effects.
Alternatively, we may compare our model to those schema-related experimental studies. For example, a recent study [3] found that rats could store the odor sequence structure as a schema by using a low-dimensional neural code in the orbitofrontal cortex, and this schema facilitated learning of new similar tasks. Our model also shows that the ordinal structure is encoded as tree-structured attractors in a low-dimensional space, and this attractor dynamics facilitates transfer learning. In future work, we will compare our model with the experimental data in detail.
**On the differences to other papers**
Thanks for providing these references. We checked all of them and summarize their differences from our work below:
The works by Cui et al. (2016), Graves et al. (2014), and Eliasmith et al. (2013) addressed sequence modeling in different ways, such as HTM with an unsupervised Hebbian rule, external memory modules, and cognitive architectures. These models differ from our work in that they do not explicitly disentangle ordinal structures from contents. Whittington et al. (2020) proposed TEM to disentangle structure from contents and integrate them via conjunctive coding. The self-supervised method used by TEM may learn a tree-like attractor structure in an unsupervised manner; however, TEM does not specify how to store the learned sequence structures and reuse them for new tasks. The LMU model by Voelker et al. (2019) efficiently captures long temporal dependencies. Our method can be applied to train the LMU model to learn complex tree-like structures, potentially further leveraging the LMU's efficiency in capturing long dependencies. Jacques et al. (2022) proposed a new deep convolutional network that utilizes logarithmically compressed temporal representations, but it does not consider the extraction of disentangled ordinal structure.
Overall, our model is fundamentally different from these works in that we explore how the brain learns the disentangled ordinal structure (schema) by employing tree-structured attractor dynamics, and how this schema facilitates transfer learning. Nevertheless, our model is not contradictory to these works; rather, they can be integrated together to perform complex sequence tasks.
Questions:
1. We typically performed 90 to 150 steps of backpropagation through time (BPTT). To mitigate potential vanishing or exploding gradients, we employed gradient clipping and carefully initialized the weights. Thus, we did not encounter vanishing/exploding gradients in practice.
2. As the fast Hebbian rule is differentiable, it can be applied in the context of backpropagation through time (BPTT). Recent works in both the neuroscience and machine learning communities have explored this issue [4,5,6,7]. In our model, we can replace the static feedforward connections from inputs to the recurrent network with context-controlled fast Hebbian weights and train the model using meta-learning methods [8]. In this way, context information can modulate feedforward connections, enabling fast binding or rebinding between contents and ordinal templates. Thus, the integration of the fast Hebbian rule and BPTT may facilitate the learning and utilization of the tree-structured attractor dynamics.
3. Yes, it is possible that the tree-like attractor structure emerges in the network through self-supervised learning. A possible solution is as follows: we first apply a self-supervised method [9] or a quantized representation method [10] to chunk raw input sequences into discrete items; we then apply a self-supervised learning method, such as the contrastive predictive loss [11], to train the recurrent network to obtain the tree-structured attractor dynamics.
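As an aside on reply 1, the gradient-clipping stabilizer mentioned there can be illustrated with a minimal sketch (not the authors' code; the toy one-unit network, the step count, and the clip threshold are all hypothetical):

```python
import math
import random

# Illustrative only: unroll a toy 1-unit RNN for ~120 steps, compute the
# weight gradient by backpropagation through time, and clip its magnitude
# so it cannot explode even for a recurrent weight larger than 1.
random.seed(0)

def bptt_grad(w, xs, target, clip=1.0):
    # forward: h_t = tanh(w * h_{t-1} + x_t)
    hs = [0.0]
    for x in xs:
        hs.append(math.tanh(w * hs[-1] + x))
    loss = 0.5 * (hs[-1] - target) ** 2
    # backward through time
    dh = hs[-1] - target
    gw = 0.0
    for t in range(len(xs), 0, -1):
        pre = 1.0 - hs[t] ** 2          # derivative of tanh at step t
        gw += dh * pre * hs[t - 1]      # contribution to the weight gradient
        dh = dh * pre * w               # propagate the error to h_{t-1}
    # gradient clipping by magnitude
    if abs(gw) > clip:
        gw = math.copysign(clip, gw)
    return loss, gw

xs = [random.uniform(-1, 1) for _ in range(120)]   # ~90-150 steps, as in the reply
loss, g = bptt_grad(w=2.0, xs=xs, target=0.5)      # w > 1 tends to amplify gradients
print(abs(g) <= 1.0)   # clipped gradient stays within the bound -> True
```

Even with a recurrent weight of 2, for which the raw BPTT gradient over 120 steps could grow very large, the clipped gradient stays within the chosen bound.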
References:
[1] N. Chomsky, MIT Press, 1995
[2] Mi et al, Neuron, 2017
[3] Zhou et al, Nature, 2021
[4] Ba et al, NeurIPS, 2016
[5] Thangarasa et al, ICML, 2019
[6] Tyulmankov et al, Neuron, 2022
[7] Dekker et al, PNAS, 2022
[8] Wang et al, Current Opinion in Behavioral Sciences, 2021
[9] Asabuki et al, Nature Communications, 2020
[10] Van Den Oord et al, NeurIPS, 2017
[11] Oord et al, arXiv, 2018
---
Rebuttal Comment 1.1:
Title: Reviewer response needed
Comment: Hello Reviewer,
The authors have endeavoured to address your comments in their rebuttal. The rebuttal phase is a key part of the NeurIPS review process. I invite you to read and respond to the authors' comments as soon as possible, and by tomorrow at the latest, to give everyone time to continue and conclude the discussion.
Thank you for helping make NeurIPS a great conference for our community.
---
Rebuttal Comment 1.2:
Comment: Thank you for providing a detailed response. I don't have any additional questions. | Summary: This paper describes a method for training RNNs that is used to extract ordinal sequences. There are two variations on the training that make this possible. First, the network is trained on sequences with a wide range of temporal delays, so that only ordinal position is relevant. Second, the training signal is the location of each sequence on a tree structure given a priori to describe the set of sequences.
The model is tractable and interpretable and two classes of findings are described. First, networks that have learned a tree structure for a particular problem can generalize rapidly to new problems with the same structure by freezing the recurrent weights and relearning the output weights. Second, the model is tested on time-warped versions of a set of spoken words and it generalizes better than a control model.
Strengths: The exposition is extremely clear.
Weaknesses: The requirement that the training set provides information about the true ordinal structure seems very strict.
The requirement for training on a wide range of temporal intervals is a serious limitation as a model of the brain.
I find the colored lines in Fig 1A,C very difficult to distinguish.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Under what circumstances would we expect the network has access to a training signal with information about the ordinal position in a tree structure? What is the use case for this network? How could one discover that structure without training it in?
A simple way to build in a model of ordinal position is to have an RNN where the rhs is modulated by a gating factor:
$dx/dt = \alpha(t) [ \ldots ]$
where $\alpha(t)$ can be learned. If $\alpha(t) = 0$ between relevant triggering stimuli, one can say it's learned an ordinal code. Is that possible in a GRU?
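For concreteness, here is a toy Euler-discretized sketch of this gating idea (illustrative only; all constants and names are hypothetical, not from the paper):

```python
import math

# Toy RNN whose right-hand side is scaled by a gate alpha(t):
#   dx/dt = alpha(t) * (-x + tanh(x + input))
# With alpha(t) = 0 between triggering stimuli the state is frozen, so the
# network effectively tracks ordinal position rather than elapsed time.
def step(x_in, h, alpha, dt=0.1):
    return h + dt * alpha * (-h + math.tanh(h + x_in))

h = 0.3
frozen = step(1.0, h, alpha=0.0)  # gate closed: state carries over unchanged
moved = step(1.0, h, alpha=1.0)   # gate open: state evolves
print(frozen == h, moved != h)    # True True
```

In a GRU, the update gate plays a comparable role: when it fully favors the previous hidden state, the state likewise carries over unchanged between inputs.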
How does this model compare in its time-warping performance to classic algorithms (e.g., Sakoe & Chiba, 1978)? On a related note, a recent approach (Jacques et al., 2022, ICML) shows effectively perfect generalization over a wide range of warping factors without data augmentation. How does this model relate to that work?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the valuable comments, which are very helpful for us to improve the paper. Below are our detailed replies.
Weaknesses
**On the prior of the ordinal structure**
Thanks for raising this concern, but we would like to point out that this is not a problem for the brain, although it may be a concern for machine learning tasks.
First, experimental studies have shown that in the brain, continuous sequences are chunked into discrete items to support high-level cognition [1]. For example, speech sequences can be hierarchically chunked into words and syllables [2]; neurons in the hippocampus have been shown to detect event boundaries when watching movie videos [3]. This chunking process naturally produces ordinal structures of temporal sequences. Computational models have also been proposed for neural chunking, such as self-supervised learning [4] and oscillation [5].
Second, the development path also indicates that the brain implements sequence chunking. For example, in language acquisition, young children learn primitive phonemes around 2 months [6], learn sequences of either ABB or ABA form around 7 months [7], and learn to recognize spoken words around 12 months. Here, phoneme learning serves as a building block for word learning, defining the ordinal structure of language sequences.
Since our focus is on exploring how the brain learns the ordinal structure disentangled from contents, the so-called schema, which is receiving increasing attention in theoretical neuroscience [8], we assume that the input sequences to our model have already been chunked in advance. When applying our model to machine learning, however, a method is needed to first segment sequences based on the statistics of items.
**On the wide range of temporal intervals**
Thanks for raising this concern, but we would like to argue that this is not a serious limitation for the brain.
First, in the brain, motor and speech sequences are generated with large variability in speed, and they exhibit large variability in separations between motor motifs and speech chunks. This large variability enables the brain to learn the tree-like attractor structure.
Second, in our network training, we actually do not need very large intervals. For the clean synthetic data, we can train the tree-structured attractors using fixed intervals (see Fig.R2-A in the reply pdf). For noisy spoken words, we do need some variation in intervals to achieve good performance, but the range is only about twice the item length. Overall, the range of temporal intervals does not need to be very large.
Questions
**On the availability of ordinal structure, the use of the model, and unsupervised learning**
1) As replied above, the brain can access the ordinal structure through chunking. Experimental data has shown that continuous sequences such as motor and speech sequences are hierarchically chunked to form discrete sequence representations.
2) Our network can be applied to model the learning of abstract ordinal structures of temporal sequences (schemas), such as motor sequences, speech, and language. For example, a potential application is to model language appreciation (certainly much more work is needed), where primitive ordinal structures are stored to facilitate language understanding. Our model may also be applied to transfer learning problems in machine learning.
3) To discover sequence structures without supervised signals, we may mimic the chunking mechanism in the brain; incorporating a generative objective in unsupervised learning may also aid in extracting sequence structures. The chunking mechanism may use a self-supervised loss [4] or a quantized representation trick [9]; the generative objective may employ the contrastive predictive coding method [10] or other self-supervised methods.
**On GRU**
Sorry, we do not fully understand the reviewer's point: “A simple way to build in a model of ordinal position is to have an RNN where the rhs is modulated by a gating factor…”.
In terms of the GRU, it can indeed learn the tree-like attractor structure if our learning protocol is used. The performance of the GRU after learning tree-like attractors is shown in Table 1, and the tree-like attractor structure of its neural activities is visualized in Fig.R2-B in the reply PDF.
**On the warping performance**
Thanks for pointing out these two references we had missed. We did not evaluate our model intensively on warping performance, because the goal of the current paper is to explore how the brain learns the disentangled ordinal structure (schema). Since the key role of a schema is in transfer learning, we have focused on investigating this issue. In future work, we will systematically evaluate the warping performance of our model and compare it with other methods.
Regarding the differences from the two works: 1) Sakoe & Chiba proposed a machine-learning-style method for representing the ordinal structure, while our method focuses on the neural representation of the ordinal structure; 2) the model of Jacques et al. was inspired by time cells in the brain, while our model was inspired by the disentangled representation of ordinal structure, manifested as tree-structured attractors. While Jacques' model may perform well at recognizing a given set of warped sequences, it likely has difficulty with transfer learning, since it does not extract the schema as our model does.
We hope that we have addressed all concerns of the reviewer and could convince the reviewer to raise the score.
References:
[1] Dehaene et al, Trends in Cognitive Sciences, 2022
[2] Dehaene et al, Neuron, 2015
[3] Hahamy et al, Nature Neuroscience, 2023
[4] Asabuki et al, Nature Communications, 2020
[5] Giraud et al, Nature Neuroscience, 2012
[6] Kuhl et al, Nature Reviews Neuroscience, 2004
[7] Marcus et al, Science, 1999
[8] Goudar et al, Nature Neuroscience, 2023
[9] Van Den Oord et al, NeurIPS, 2017
[10] Oord et al, arXiv, 2018
---
Rebuttal Comment 1.1:
Comment: I have read and considered the author's response. | Summary: The canonical biological neural circuit model, described by equation (1) in this work, primarily relies on attractor dynamics to perform cognitive tasks involving temporal sequences. Facilitating the emergence of appropriate attractors during training is a difficult task that challenges neuroscientists even today. The authors alleviate this problem by first training RNNs on simple abstract tasks to allow tree-structured attractor templates to appear within the network, and then further training the networks on more complex tasks, such as a key-word spotting task which will effectively reuse the existing templates. This idea is similar to the emerging field of 'schemas' in network neuroscience.
On a technical standpoint, I am convinced that this work represents novel and significant advances in the field of biological RNNs, specifically in the subfield of schema formation. However, I have certain doubts on the neuroscientific elements of this work, especially since the paper is thoroughly cited with neuroscience literature. None of these concerns are a major detriment to my evaluation, and hence I recommend an accept for this work. However, I feel that the authors can considerably improve their discussion and neuroscientific arguments regarding the reusage of tree-structured templates. As such, I hope the authors can adequately address my concerns in the weaknesses section to further solidify this work.
Strengths: The model and training methods have been extensively explained, which makes their objective clear. The underlying intuition that is being conveyed is straightforward and easy to understand, which is the direct consequence of a well-written introduction from pages 1 to 4. Subsequent sections retain the same quality of writing and scientific rigor.
Weaknesses: As mentioned before, the key issues that I have about this work stem from neuroscientific plausibility. I do not have specific questions or requests regarding each point, and thus I would recommend the authors simply address these points by arguing for or against these statements.
1. It seems like in this work, the network needs to forget about the previous task in order to use an existing tree-structured template. This seems unrealistic for such a model, and seems to suggest a slot-based usage for such attractors.
2. There exists a lesser-known question in biological RNNs: suppose a network is tasked to remember 3 two-digit numbers (including 00) in sequence. The number of attractors required for this task would be $100^3 = 1,000,000$, which is exaggeratingly large for such a simple task that most humans are able to do. This deters the approach of using tree-structured attractors.
3. Discrete tree-structured attractors are also unable to account for continuous variables, such as color, intensity, speed, temperature etc. Yet, there is no simple generalization for this work to account for continuous variables.
4. Reusing a template also implies that the depth of the tree is fixed. While a tree can indeed grow deeper after subsequent training on more complex tasks, or become shorter due to lack of usage, it does not agree with the logical viewpoint that cognition is highly adaptive to sequence length at much shorter timescales compared to actually learning and developing tree-structures of modified depths.
5. Tree-structures may also lead to unintended behaviors if the wrong inputs are provided, especially if the inputs are not within the scope of the task. This leads to involuntary usage of these tree structures, thus presenting an inherent inflexibility in tree structures.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: See weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: I was not able to find explicit statements regarding limitations, although the typical shortcomings of a biological RNNs have been addressed as future research directions in lines 294-309. Points addressed in the weaknesses may possibly be added as limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the valuable comments, which are very helpful for us to improve the paper. Below are our replies to the comments, point by point.
On weaknesses:
1. Thanks for raising this important issue. In short, reusing an existing tree-structured template for a new task in our model does not require forgetting the previous task. In our model, we consider a recurrent network reserved to store tree-structured templates, a kind of slot-based memory. This is motivated by experimental findings which indicate that the brain recruits independent resources to store ordinal structures of temporal sequences disentangled from contents [1]. The advantages of this disentangled ordinal structure representation are that: 1) it saves the overall resources for representing a large number of temporal sequences sharing ordinal structures; 2) it allows the brain to reuse the shared ordinal structures, the so-called schemas, to process different sequences flexibly. Once a temporal sequence arrives, the neural system rapidly binds the ordinal template with the contents. To process a new task, the neural system need not forget the previous one; rather, it learns a new set of feedforward connections from the sensory cortex through dynamic binding. It has been explored previously that dynamic binding can be implemented in the brain through different means [2], including the fast Hebbian rule [3] and neural oscillation [4]. Notably, in the language domain, this schema idea is consistent with the minimalist program in linguistics proposed by Chomsky, which suggests that the brain stores many fundamental syntactic structures and is able to generate arbitrarily complex syntactic trees by combining and reusing these primitive structures (referred to as merge), realizing fast language acquisition and appreciation [5].
2. The reviewer raised an interesting issue about the memory storage problem. Actually, disentangling ordinal structure from contents as a schema is a way for the brain to avoid the memory storage explosion problem. For a large number of sequences sharing the same ordinal structure, the brain only needs to store the ordinal template; while when processing a sequence with specific contents, the brain can quickly bind the template with the contents via dynamical binding.
3. We agree that our model cannot account for continuous variables, but this is not a problem for the brain. A large volume of experimental studies has shown that the brain employs a different type of network model, called continuous attractor neural networks, to process continuous variables, such as head direction [6] and spatial location [7]. This is not contradictory to our proposal of using tree-structured attractors to represent the ordinal structure of temporal sequences formed by discrete items.
4. We thank the reviewer for raising this important issue, which we did not address adequately in the original manuscript. Actually, our model has the capability to adapt to temporal sequences of varying lengths. First, for a sequence shorter than the template, our study has already demonstrated that our model can adapt (see Fig.3C-D in the manuscript, in which the network learned new sequences covering only part of the stored tree structure). Second, for a sequence longer than the template, our model can learn to combine shorter primitive templates to form a longer one, a strategy similar to the one proposed by Chomsky for language processing [5]. To demonstrate this idea, we conducted an additional experiment in which the neural system learns to process sequences of depth 3 by combining two primitive templates of depth 2; see the new Fig.R1 in the reply pdf.
5. We thank the reviewer for pointing out this interesting issue, which actually highlights an advantage of our model. In the conventional heteroclinic channel model, each node is an unstable saddle point, which can lead to unintended usage of the tree structure if the initial input is wrong. In our model, however, each node is a stable attractor, and moving to the next node requires the next input item to be correct in order to induce the transition (see lines 186-189 in the manuscript). As a result, our model displays event-driven and evidence-accumulation behaviors (see Fig.4C-D in the manuscript). Overall, our model is rather robust to noise, as it employs a sequence of attractors to represent the information.
On limitations:
We appreciate the reviewer's insightful comments and will add discussions about the limitations as suggested by the reviewer in the revised manuscript.
References
[1] Dehaene et al, Neuron, 2015
[2] Engel, Trends in Cognitive Sciences, 2001
[3] Bittner, Science, 2017
[4] Klimesch et al, Neuroscience & Biobehavioral Reviews, 2010
[5] N. Chomsky, MIT Press, 1995
[6] Kim et al, Science, 2017
[7] Giocomo et al, Neuron, 2011
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the reply.
> disentangling ordinal structure from contents...
In general, I feel like the authors are defending the idea of disentangled representations in points 1 and 2, rather than the methodology in this paper. In this work, the authors retrain input and output weights in order to re-use tree structures. This is motivated by the idea of disentangled representations, which is a similar high-level concept, but they are fundamentally very different. For example, there is no mention of proposed mechanisms for "binding" in this context. Specifically, in the aforementioned task to remember 3 numbers, it is not possible to "bind" a number to an attractor simply by training a new set of output weights whose purpose is to decode 1 of 100 numbers.
Quick update 5 minutes later: I am referring to training a network on a task to remember 3 arbitrary numbers, not 3 specific numbers such that it is possible to train input weights to bind them to attractors.
> for a sequence longer than the template, our model can learn to combine shorter primitive templates to form a longer one
I thank the authors for the additional results, which have convinced me of their effectiveness.
> highlights an advantage of our model
What I mean is that an unintended input will cause the dynamical system to fall into unwanted stable attractors and stay there. I understand that the authors are trying to say that stable attractors are resistant to unintended inputs, but at this point I believe this discussion is too abstract and I will not consider the good or bad of this for my decision.
I thank the authors again for the response and for the reasons above I will keep my current score, but I optionally invite the authors to respond to this for other reviewers and AC if there are any pressing points to be made (which I will also read).
---
Reply to Comment 1.1.1:
Comment: Thank you for your kind and insightful comments. Please let us explain further.
Our emphasis on disentanglement stems from the fact that the learned structured attractors offer the advantage of facilitating the learning of new tasks with ease, along with the ability to generalize effectively across a wide range of sequence classification tasks.
Regarding point 1:
For example, when learning a new 4-class phoneme sequence task, which includes sequence elements such as "pcl", "pau" and "dcl", and shares the same task structure with a 4-class synthetic sequence task (illustrated in Fig.1 of our manuscript), we can simply freeze the learned recurrent and readout connections while concentrating solely on learning new feedforward connections between phoneme inputs and the recurrent network, using backpropagation through time. As the synthetic task and the phoneme task have different feedforward connections, the network can perform both tasks.
Regarding point 2:
Consider a simplified scenario in which we exclusively employ templates of depth 2. Addressing the challenge of a 100^3-class sequence classification task involves a two-fold process. Firstly, we learn a multitude of attractor templates independently, each of length 2, using our methods; this compilation of templates serves as a reservoir of schemas, which receives inputs and generates predictions through a linear layer as in Fig.R1 in the reply pdf. Secondly, we learn only the input-output connections while preserving the recurrent connections in the schemas. Distinct attractors from different schemas are able to represent sequence variables in a distributed manner: in an ideal scenario, 50 attractors from 50 schemas could collectively represent a formidable 2^50 variables. Similarly, attractor trajectories within each template also contribute to representing diverse sequence trajectories in a combinatorial manner. Consequently, the demand for a substantial 100^3 attractors is circumvented.
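A back-of-envelope check of the counting in this argument (illustrative arithmetic only, using the numbers quoted above):

```python
# One dedicated attractor per length-3 sequence over 100 items, versus a
# distributed code over 50 binary schema slots.
monolithic = 100 ** 3        # dedicated attractors needed for every sequence
distributed = 2 ** 50        # combinations representable by 50 binary slots
print(monolithic)                # 1000000
print(distributed > monolithic)  # True
```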
Regarding point 5:
In the case that an unintended input leads to unwanted stable attractors, we agree that our model may fail. However, it is worth noting that such instances may indicate the presence of corrupted sequence inputs fraught with substantial noise, which could similarly lead to failure in other sequence models, including reservoir networks or heteroclinic channels. To address this challenge, a plausible solution may involve a hierarchical recurrent network, where neurons in the high-level layer integrate lengthy sequence elements and help to rectify unintended behaviors.
We thank the reviewers for their valuable criticism, which we intend to incorporate into our future studies. | Rebuttal 1:
Rebuttal: We appreciate the valuable comments from all reviewers, which are very helpful for us to improve the work. We have addressed all concerns of the reviewers point-by-point.
Attached please find the supplementary figures to answer the concerns of reviewers.
Pdf: /pdf/15b198b189886101d56fdebb1b92add037b6ab10.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
DiffVL: Scaling Up Soft Body Manipulation using Vision-Language Driven Differentiable Physics | Accept (poster) | Summary: The paper works on trajectory generation for soft body manipulation with differentiable simulation. To address the key challenge of representing task goals for optimization, the authors propose to use natural language descriptions to lower the barrier for annotation, where a framework that utilizes LLM for translating natural language into optimization programs is developed to facilitate data collection.
Strengths: The idea of task specification with natural language is an interesting idea;
The overall framework seems to be reasonable and technically sound; it is smart to leverage an LLM for converting natural language descriptions into optimization programs;
The paper is well-presented and easy to read.
Weaknesses: More evaluations can be performed to validate the robustness of the trajectory generation;
There is no discussion of the failure cases;
Some missing citations in line 21;
Typo in figure 4: smapling -> sampling;
Some works that leverage natural language for task goal specifications, which may need to be discussed in the related work:
[1] StructFormer: Learning Spatial Structure for Language-Guided Semantic Rearrangement of Novel Objects, ICRA 2022;
[2] Differentiable Parsing and Visual Grounding of Natural Language Instructions for Object Placement, ICRA 2023
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In section 4.3, the authors say that a sampling-based RRT planner is used to determine the path for the actuator, and then the trajectory is refined with a gradient-based optimizer. Could the authors provide more details on how the trajectory is initially calculated and how it is refined for execution? Besides, it would be better to provide more snapshots for the rollouts to show the difference between the initial trajectory and the final ones;
Would the trajectory optimization be sensitive to the initialization? It would be interesting to conduct some experiments for demonstrating the robustness of the initialization strategy;
Are there any failure cases that are caused by the LLM translation? It would be good to understand the failure mode in which situations the LLM failed to generate appropriate optimization programs.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are discussed in the paper, but no failure cases are presented and analyzed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your thorough assessment and positive feedback concerning our paper. It is genuinely rewarding to learn that you recognize the significance of our distinctive task representation, the novel method to integrate LLM, and the overall clarity evident in our writing. Your insights are invaluable to us.
> Could the authors provide more details on how the trajectory is initially calculated and how it is refined for execution?
We clarify that our approach does not use a gradient-based optimizer to optimize trajectories generated by the RRT planner. Motion planning and optimization occur in separate temporal phases. Initially, we plan and execute a trajectory to position the actuator at the specified initial pose. Subsequently, we initialize a second trajectory starting from this pose, with zero actions so that the actuator initially holds its position. We then optimize only this second trajectory to complete the task; the optimizer leaves the first trajectory unaltered.
> More evaluations can be performed to validate the robustness of the trajectory generation & sensitivity to the initialization
We emphasize that our vision-language task representation is designed to provide a good initial pose for trajectory optimization. The `sample` function generates poses in line with the annotators' guidance, and the optimization trajectory starts with zero actions. This yields a robust optimizer for different optimization programs.
The table below illustrates the solver's performance across several sampled tasks. The standard deviation, calculated across five seeds, highlights that the observed variance is not significant.
| | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 | Task 6 |
| ----------------- | ------- | ------- | ------- | ------- | ------- | ------- |
| Average IOU Score | 0.575 | 0.453 | 0.684 | 0.529 | 0.474 | 0.604 |
| Std. Dev.         | 0.026   | 0.029   | 0.052   | 0.039   | 0.016   | 0.007   |
> Discussion of the failure cases; failure cases caused by the LLM translation
Although GPT-4 already outperforms GPT-3.5 on the translation task, we observed several failure cases in the translation process. For example, the output of GPT-4 may contain objects that do not exist. In a task that lifts a sphere above a box, the output may contain an invalid statement like `require(similar('sphere', goal('sphere_above_box')))`, where `sphere_above_box` does not correspond to a proper object. It may also generate a statement with a type error; for example, it may output `require(similar('sphere', y(max(pcd('box')))))`, which triggers a compilation error because we cannot compute the shape distance between an object and a coordinate. Fortunately, such errors can be detected during compilation and resolved by resampling. However, there are also rare cases where the LLM removes constraints from, or adds constraints to, the instructions. For example, it may ask the solver to fix the position of an object that needs to move. In these cases, our solver may fail to find a suitable solution. Restricting the output to fit a particular syntax, for instance using context-free grammars, would resolve such problems [1]. We will conduct a more comprehensive analysis of these failure cases and elaborate on them in the revised manuscript.
[1] Shin, Richard, et al. "Constrained language models yield few-shot semantic parsers." arXiv preprint arXiv:2104.08768 (2021).
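The detect-and-resample strategy described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `sample_translation`, `compile_program`, and the tuple-based program encoding are hypothetical stand-ins for the LLM translator and the DSL compiler.

```python
class CompilationError(Exception):
    """Raised when a translated program references an unknown object."""

def sample_translation(instruction, known_objects):
    # Hypothetical stand-in for querying the LLM; a real translator would be
    # stochastic. Here we deterministically emit one DSL-like statement.
    return [("require_similar", "sphere")]

def compile_program(program, known_objects):
    # Detect "unknown object" errors at compile time, as described above.
    for op, obj in program:
        if obj not in known_objects:
            raise CompilationError(f"unknown object: {obj}")
    return program  # a real compiler would return a differentiable objective

def translate_with_resampling(instruction, known_objects, max_tries=5):
    """Resample the LLM translation until the program compiles."""
    for _ in range(max_tries):
        program = sample_translation(instruction, known_objects)
        try:
            return compile_program(program, known_objects)
        except CompilationError:
            continue  # compilation error detected: resample
    raise RuntimeError("no valid program found after resampling")
```

Note that this loop only catches errors visible at compile time; the rarer semantic failures mentioned above (spurious or missing constraints) pass compilation and would require the grammar-constrained decoding of [1] to prevent.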
> Some works that leverage natural language for task goal specifications, which may need to be discussed in the related work
Thanks for pointing out the related literature. Our work aligns closely with leveraging natural language for task goal specifications. Our vision-language task representation can be considered a richer extension of prior language goal representations in the following aspects:
- Our vision-language task representation introduces a temporal dimension, offering a more detailed description of the *physical process* rather than just the *scene*. The description of the process aids the solver in synthesizing the trajectory beyond single-goal guidance.
- We provide the GUI tool to directly create vision goals instead of only relying on language, yielding a more intuitive and accurate means of specifying goals.
- This flexibility empowers us to focus on low-level physics and manipulate soft bodies effectively.
We believe it is possible to integrate ideas from language-guided goal specification into our work. We look forward to exploring the synergy between these approaches and will include more discussion in our revised manuscript.
We will improve our writing and polish the manuscript. If you have any further questions, please do not hesitate to ask.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer,
Thanks again for your suggestions to strengthen this work. As the rebuttal period is ending soon, we wonder if our response answers your questions and addresses your concerns. If yes, would you kindly consider raising the score? Thanks again for your very constructive and insightful feedback!
Best,
Authors
---
Rebuttal Comment 1.2:
Title: POST-REBUTTAL
Comment: Thanks to the authors for their updates. The supplementary technical details effectively address the inquiries I had, leading me to revise my rating to a "weak accept." | Summary: This study focuses on soft-body manipulation problems such as flattening dough using a rolling pin, cutting deformable objects, and more. It introduces a method that engages non-expert users to provide detailed annotations and identify sub-goal states within key frames of the task video. Despite requiring additional human intervention, this approach generates a rich learning signal for manipulation tasks involving deformable objects. This significantly eases the learning difficulty associated with such tasks. The proposed method has been evaluated based on six fundamental deformable object manipulation skills, demonstrating its effectiveness.
Strengths: This work proposes a unique representation of task specifications. It incorporates human-labeled annotations for deformable object tasks to shape an objective/reward function. Such a reward function provides far richer reward signals than just a goal state, potentially greatly reducing policy learning difficulties.
Furthermore, this study offers a new dataset that includes 100 deformable object manipulation tasks, complemented by human annotations on key frames. This contribution could be beneficial to the broader community and greatly aid studies related to deformable object manipulation.
The presentation of this research is both clear and organized.
Weaknesses: Human annotation offers a valuable reward signal, which could potentially simplify the learning process. However, it also imposes certain constraints on the policy strategy. When states deviate from the demonstrated trajectory, adhering strictly to specific action commands could exacerbate the situation. Hence, at its current stage, it lacks the flexibility needed to handle unseen states not accounted for in the instructions/annotations.
Moreover, each task necessitates considerable human involvement in creating subgoal states and corresponding annotations. Although software solutions have been developed to assist in this process, it still represents a significant involvement of human resources and incurs considerable costs.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Q1: How is a key frame chosen? Also, can you explain how the decomposition of long-horizon tasks occurs?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See weaknesses and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your comprehensive evaluation and positive feedback on our paper. It's truly gratifying to know that you found value in our unique task representation, the novel manner in which we incorporated LLM, and the clarity throughout our writing.
> Imposes constraints on policy strategies; strict adherence to action commands can worsen trajectory deviations; lacks flexibility for unseen states outside annotations.
Thank you for bringing this issue to our attention. If we understand correctly, it refers to the case where our solver fails to reach a certain keyframe, making the reward signal in the subsequent stage useless. This can happen when the annotations are imperfect. We acknowledge this as a limitation of our current solver; in theory, we could introduce a DAgger-like approach that allows users to actively adjust annotations by adding or editing keyframes based on the found trajectory and offering corrective input. This integration of annotation and optimization could provide "closed-loop" control to help the agent recover from such unseen states.
> Necessitates considerable human involvement
Thank you for bringing this issue to our attention. While we recognize our current labeling system can be significantly improved, we would like to point out that our framework is already sufficient to build SoftVL100 -- a compact yet diverse dataset with many distinct tasks and high-quality annotations on how to achieve the desired manipulations. We believe human involvement is valuable when paired with an efficient annotation experience, so that non-expert annotators can provide information-rich annotations for robotic tasks without excessive overhead. We are confident that we can significantly reduce human effort with enhanced toolchains; for example, advanced tools such as augmented reality, 3D mice, and even language-based instructions can facilitate the manipulation of soft bodies. However, further improvement of the toolchain lies in the realm of HCI and is beyond our scope, so we leave it as future work.
> How is a key frame chosen? Also, can you explain how the decomposition of long-horizon tasks occurs?
The selection of keyframes is determined by the annotators, as we believe humans can naturally decompose complex tasks into simpler ones for communication. When annotators manipulate a scene within the GUI, they can save the current scene as a key frame within the task representation. They can also add, modify, or delete key frames using the editors within the web server interface.
The annotators were instructed on what kind of decomposition would likely result in a working trajectory, i.e., segmenting the manipulation process at contact-point changes, which is a simple and effective way for annotators to provide high-quality labels without any expert understanding of the physics or the solver. If the agent is required to manipulate a new object, or the actuator needs to establish contact on a different face of the object, annotators introduce a new keyframe. Additionally, we employ a heuristic to segment YouTube videos: a segment boundary is placed wherever there is a significant disparity between two consecutive frames, which simplifies keyframe selection for the annotators.
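The frame-difference heuristic for video segmentation can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the authors' code: `propose_keyframes` and the threshold are hypothetical, and real frames would be image arrays rather than flat numeric lists.

```python
def propose_keyframes(frames, threshold):
    """Return indices where consecutive frames differ substantially.

    frames: list of equal-length numeric sequences (flattened pixel values).
    threshold: hypothetical tuning parameter on the mean absolute difference.
    """
    cuts = []
    for i in range(1, len(frames)):
        prev, curr = frames[i - 1], frames[i]
        # mean absolute per-pixel difference between consecutive frames
        diff = sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)
        if diff > threshold:
            cuts.append(i)
    return cuts
```

The returned indices would only be candidate boundaries presented to annotators, who still decide where the actual keyframes go.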
We hope our explanation helps alleviate your concerns. If you have any further questions, please do not hesitate to ask.
---
Rebuttal Comment 1.1:
Title: Reply to Authors
Comment: Thanks for the rebuttal. My concerns have been addressed. | Summary: This paper presents a novel approach to soft body manipulation employing the strengths of the Large Language Model (LLM). The key innovation is viewing tasks as data, with each data point consisting of an initial scene and an optimization objective. To tackle the challenge of task representation, the authors introduce DiffVL, a framework enabling non-expert users to define soft-body manipulation tasks to a differentiable solver using a combination of vision and language. The users specify tasks via an interactive simulator with a sequence of 3D scenes (keyframes) connected by natural language instructions. The authors also developed a corresponding GUI and curated SoftVL100, a vision-language dataset with 100 diverse tasks. They further developed a method that combines the power of a large-language model and differentiable physics to solve a wide variety of challenging long-horizon tasks in SoftVL100. This study's contributions lie in its new task representation, the developed GUI, the curated dataset, and the DiffVL method.
Strengths: The paper under discussion is a robust, well-rounded work, bringing to the table significant contributions across multiple facets. It excels in terms of novel methodology, tool development and proposes a new dataset:
• The authors introduce an innovative multi-stage vision-language representation. This new approach simplifies the definition of soft-body manipulation tasks, making it accessible for non-expert user annotations - a noteworthy contribution to research in this area.
• In terms of tool development, the authors have crafted a corresponding Graphical User Interface (GUI), enhancing the user experience and overall accessibility of their proposed approach. They have also curated SoftVL100, a compilation of 100 realistic soft-body manipulation tasks derived from online videos. This dataset stands as a valuable resource for further research and application in this domain.
• Moreover, the authors have devised DiffVL, a method that bridges the gap between a large-language model and differentiable physics. This combination of strengths is adeptly applied to tackle a wide variety of challenging long-horizon tasks presented in SoftVL100, which marks a significant advancement in the field.
The experimental section of the paper is another key strength, featuring a thorough ablation study.
Weaknesses: The relative weakness of the approach can be considered the amount of human labor required for dataset collection. Also, DSL could sometimes be too constrained to define arbitrary new tasks. Despite these points, they in no way diminish the overall quality of the work presented in the paper. The minor limitations identified simply highlight areas for potential future refinement. The project stands as an excellent example of research and tool development, and I am eager to see how it evolves.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: • It would be good to have more info about time constraints - simulation and training time for different tasks, not only training progress per epoch.
• Were there any simulation failure cases? How challenging was simulator tuning for different tasks?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Limitations were addressed fairly well.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We're thrilled about your positive feedback on our work. Your acknowledgment of our innovation, robustness, and solid foundation is truly gratifying. Your encouraging words inspire us to pursue excellence in all our efforts. Thank you.
> the amount of human labor required for dataset collection
We agree that reducing the amount of human labor is an important research direction. We foresee that better UI design, although not our core contribution, can significantly improve the annotation process with more attention on the HCI frontier. For example, we can develop better tools such as augmented reality (AR), 3D mice, and even language instruction. Our current workflow demonstrated how to construct an appropriate data format (namely, keyframes and natural-language annotations), along with a UI system, that makes it feasible to both (a) enable end users to provide high-quality annotations and (b) have the data be readily usable by machines. We hope our work can bridge the realms of HCI and robot learning and bring better synergy between humans and robots.
> DSL could sometimes be too constrained to define arbitrary new tasks.
We acknowledge that our DSL is not designed to cover arbitrary natural language. For instance, our DSL assumes a fixed number of stages and often deals with one object per stage, making it less suitable for scenarios involving multiple objects moving simultaneously. Additionally, the requirement to specify exact time makes it challenging to implement temporal logic like `do A until B.` Moreover, the DSL supports only a limited range of spatial and dynamic relationships and lacks coverage for the various verbs used in everyday language.
However, although rudimentary, our proposed DSL does not need to be as "complete" as natural language. Notice that it serves as an auxiliary loss function for completing the correct trajectory, so the lack of certain DSL primitives does not make finding a trajectory infeasible, only more difficult. This difficulty can be overcome elegantly by further decomposing the task into sub-tasks, simply by providing more keyframes to guide the trajectory step by step, effectively making it always possible for a motivated user to solve the task with the physics backend.
Moreover, it is possible to refine our DSL for a wider range of scenarios. An interesting approach is to conduct a more thorough user study as in [1], which can help us identify common elements and distill valuable insights to create a more refined DSL, ensuring that our DSL remains relevant and tailored to the needs of our users and the tasks they encounter.
[1] Acquaviva, Sam, et al. "Communicating natural programs to humans and machines." Advances in Neural Information Processing Systems 35 (2022): 3731-3743.
> Were there any simulation failure cases? How challenging was simulator tuning for different tasks?
Our simulator is constructed using the MPM algorithm from PlasticineLab, which prioritizes time efficiency over simulation accuracy. Consequently, a frequent issue is that when the soft body moves at high speed, it may penetrate the rigid actuators due to the low MPM resolution used for fast simulation. However, we have not fine-tuned the simulator for specific tasks, in order to keep simulator tuning simple.
> It would be good to have more info about time constraints - simulation and training time for different tasks, not only training progress per epoch.
For a single-stage task, training takes on average 10 minutes for 300 steps on a machine with an NVIDIA GeForce RTX 2080 Super, where each trajectory contains 80 simulation steps. For most tasks, 300 training steps are sufficient. We will include a per-task time analysis in our revised manuscript.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. All my questions and concerns were addressed. I'll keep my score unchanged, and I think it's a solid and interesting work. | Summary: This paper demonstrates curating a set of 100 soft-body manipulation tasks and provides expert policies for them by using a mix of: annotators that provide supervision in the form of keyframes and/or natural langauge annotation, translating the annotations into programs via an LLM, and solving an optimization.
Strengths: One strength of this paper is that obtaining manipulation policies for soft bodies is undoubtedly hard, and so anything that can do so has some potential value to the community.
Additionally, the dataset, if released by the authors, could be valuable. The tasks and expert policies could be used as data to study methods that require some amount of expert data (imitation / offline RL / etc.), and it's hard both to get these types of policies and to build simulation environments in order to study these well. Good datasets and tasks can be considered a bottleneck of manipulation research, especially with soft body manipulation, and the dataset could be valuable here.
Weaknesses: The first sentence of this paper has unfinished citations. It reads, verbatim: "This paper focuses on soft body manipulation, a research topic with a wide set of applications such as folding cloth[cite], untangling cables [cite], and cooking foods[cite]." This is a pity because it makes it hard to take the work too seriously given an incomplete first sentence. Meanwhile, the rest of the paper seems relatively polished, so I'm doing my best to evaluate it seriously.
One limitation that should be kept in mind is that in multiple ways I only see the methods of this paper as being relevant to simulation, and not directly to the real world. This applies both to the 3D GUI annotation process, as well as the process for converting into something that the differentiable physics solver can solve. The work could help enable further studies in simulation that could eventually impact the real-world, indirectly either through helping provide evaluation tools or perhaps eventually sim-to-real, but not directly.
Overall, I think the main potential interesting thing is that the authors have actually provided expert policies for 100 or so soft-body manipulation tasks. This is hard to achieve. I personally find that the existence of the policies and tasks is the main valuable and interesting thing, and less so the methods used to obtain them.
Additionally, 100 tasks is not that large for a crowdsourced effort. This is an important point in my opinion because it's likely the process might have just gone faster if the first author designed all the tasks themselves rather than teaching others to do so. One might need to get to 1,000 or 10,000 tasks in order to really start to see the benefits of a crowdsourced effort. Accordingly this in my opinion undermines the care taken in the paper to make the annotation process usable by non-experts.
Another lens to look at this paper is: what actual results are shown? There are two results tables/figures: Table 1, and Figure 7. Table 1 shows that it is hard for SAC or PPO to solve the tasks, demonstrating that they are indeed not easy. It also shows that their method, which has access to considerably more info than SAC/PPO due to its use of the annotations, can solve the tasks. This is not a fully fair comparison, and that's fine. The second results table shows that the multi-stage annotations and optimizations are useful. This also makes sense. It is hard to pinpoint what specific contribution claim this results figure supports though, since I don't think there is clear novelty in the multi-step framework presented by the authors.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Do each of the points discussed in the weaknesses make sense?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors do discuss limitations in the conclusion. They should add that the methods are limited to simulation and impact on the real-world is left as a future question to address.
They don't discuss societal impacts but I think that's okay here.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful evaluation. We genuinely appreciate your recognition of the significance of our soft body manipulation dataset and the challenges it presents. Your criticism is invaluable to us, and we would be more than happy to discuss your concerns in greater detail.
> I personally find that the existence of the policies and tasks is the main valuable and interesting thing, and less so the methods used to obtain them.
We want to emphasize that our contribution goes beyond just the dataset itself. Our work highlights that not only the dataset itself but also the task representation and methodology employed in creating the dataset play a crucial role in robot learning. With our approach, other researchers will be able to efficiently curate their own datasets in a similar manner, beyond what we have already collected.
- We advocate the benefits of using crowd-sourcing for task collection, as opposed to conventional methods like gathering demonstration trajectories. The latter "requires non-trivial human efforts, is limited to robots with human-like morphologies, and may be challenging for collecting tasks that are non-trivial for humans." Crowd-sourcing also helps avoid tedious reward engineering and paves the path toward a large-scale dataset with diverse tasks.
- We use a large language model to compile instructions for rewards, setting us apart from recent studies that use LLMs to generate high-level plans. This approach is not only more straightforward for the LLM but also allows us to leverage existing solvers for precise control over soft bodies, offering finer-grained manipulation capabilities.
- Our approach introduces a novel way of representing tasks by combining vision and language, enabling the creation of a differentiable program capable of describing multi-stage temporal procedures. It simplifies trajectory annotation and seamlessly integrates with trajectory optimization; introducing the vision subgoal allows us to express complex deformations of soft bodies.
- We also want to emphasize the importance and the potential of the GUI tool. It facilitates more straightforward communication between the robot and human annotators, enhancing the overall annotation process. The current GUI tool has some room for improvement -- for instance, using a 3D mouse to ease the manipulation of 3D objects.
Through the tables/figures, we illustrated that "considerably more info" and "the multi-stage annotations and optimization" can solve the tasks, **which evidences the importance of suitable annotation and the design of the annotation tools**. This is why we designed the tool to help annotators gather and use that information to solve the tasks.
> being relevant to simulation; they should add that the methods are limited to simulation and impact on the real-world is left as a future question to address.
We acknowledge the gap between the simulator we used and real-world environments, yet we want to emphasize that we focus on automatic task and policy generation. The goal is to develop a method to scale up the demonstration-collection process and enable robots to master more diverse skills. Transferring the policies found by the differentiable simulator to the real world [1] is important but orthogonal to our current study. Moreover, we believe that various components of our framework can offer valuable contributions to real-world scenarios: our vision-language representation naturally describes tasks in the real world; leveraging large language models aids in generating rewards for evaluating real-world results; and our GUI tool can effectively manipulate real-world materials, particularly in constrained tasks, serving as a direct interface between humans and robot controllers. Thus, we consider real-world experiments compatible with our approach rather than a limitation of the existing method.
[1] Xu, Zhenjia, et al. "Roboninja: Learning an adaptive cutting policy for multi-material objects." arXiv preprint arXiv:2302.11553 (2023).
> One might need to get to 1,000 or 10,000 tasks in order to really start to see the benefits of a crowdsourced effort. It's likely the process might have just gone faster if the first author designed all the tasks themselves
We respectfully push back on this point. Our framework has already benefited us by creating SoftVL100, the largest soft-body manipulation task set so far. Notice how challenging it would be for a single author to create 100 tasks, which may take $100\times 0.5=50$ hours, while involving several students, e.g., 10, to parallelize the collection can finish it within a day ($50/10=5$ hours per person) and can generate a broader range of diverse tasks. Thus, we believe that crowdsourcing is vital to achieving this goal. We have taken the first step in demonstrating how non-experts can contribute to scaling up robot learning, which may inspire the field to explore ways of involving individuals from various backgrounds in expanding robot manipulation capabilities. This direction is clearly promising and not constrained by the number of tasks: scaling up to 1,000 tasks requires in total $1000\times 0.5=500$ hours, which would exceed our group's capacity but could readily be undertaken by groups with more resources.
Once again, thank you for taking the time to review our manuscript. We will polish it further and include the discussions above. We're eager to discuss and address any concerns you may have.
---
Rebuttal 2:
Comment: Dear reviewer,
Thanks again for your suggestions to strengthen this work. As the rebuttal period is ending soon, we wonder if our response answers your questions and addresses your concerns. If yes, would you kindly consider raising the score? Thanks again for your very constructive and insightful feedback!
Best,
Authors
---
Rebuttal Comment 2.1:
Comment: Dear reviewer `kids`,
Thanks again for your suggestion to strengthen this work. As the rebuttal period is ending soon, we wonder if our response answers your questions and addresses your concerns. If yes, would you kindly consider raising the score? Thanks again for your very constructive and insightful feedback!
We are looking forward to your feedback!
Best,
Authors
---
Reply to Comment 2.1.1:
Title: Rebuttal ends tomorrow!
Comment: Dear reviewer,
Thanks again for your suggestions to strengthen this work. As the rebuttal period is ending tomorrow, we wonder if our response answers your questions and addresses your concerns. If yes, would you kindly consider raising the score? Thanks again for your very constructive and insightful feedback!
Best,
Authors | Rebuttal 1:
Rebuttal: # General response
We thank all reviewers and ACs for their time and effort in reviewing the paper. We are glad that the reviewers generally recognized the following contributions.
**Problem and dataset** The paper curated a hard and valuable dataset (`kids`, `xC3q`), which is beneficial to the broader community and greatly aids studies related to deformable object manipulation (`EcLH`).
**Method** The paper proposes a novel framework combining an LLM and a differentiable simulator (`Scwt`). The multi-stage vision-language representation is innovative (`xC3q`), interesting (`C43D`), and unique (`EcLH`). The overall framework is reasonable and technically sound (`C43D`). It is smart to leverage an LLM (`C43D`), and the reward signal is valuable (`EcLH`).
**Tool development** Intuitive interface (`Scwt`) and GUI that enhances the user experience (`xC3q`).
**Experiments** The paper achieved a good empirical success rate (`Scwt`) and provided a thorough ablation study (`xC3q`).
**Presentation** The writing is clear (`Scwt`). The paper is well-organized (`EcLH`) and easy to read (`C43D`).
We have carefully considered all the questions and concerns raised by the reviewers and provided a detailed response to each reviewer. We have also polished our manuscript and fixed typos.
We hope our responses have convincingly addressed all reviewers’ concerns. We thank all reviewers again for their time and effort! Please don’t hesitate to let us know of any additional comments on the manuscript or the changes.
Best Regards,
Authors | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes DiffVL, a novel framework that tackles soft-body manipulation, which consists of a GUI for users to specify tasks easily and a large language model (LLM) for translating text instructions to programs for policy learning and execution.
Strengths: - An intuitive user interface for task specification
- A novel combination of LLM and a differentiable simulator for program generation and optimization
- Good empirical success rate
- Clear writing
Weaknesses: - Heavy user input needed on the magnitude of hours to specify task
- Comparison in Table 1 is a bit unfair since other pure RL baselines do not leverage language guidance
- Writing is not polished (e.g., missing citations in line 21)
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: How does this approach differ from other recent robot learning papers utilizing LLM for instruction generation/high-level planning? The novelty in this aspect is obscure to me.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors discussed limitations adequately in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We genuinely thank you for your thorough assessment and encouraging remarks regarding our manuscript. We are pleased to learn that you appreciated the intuitiveness of our user interface and the novelty of our approach to integrating LLM. Additionally, we are grateful for your recognition of our empirical study and the clarity of our writing.
> Heavy user input needed on the magnitude of hours to specify task
Thank you for bringing this issue to our attention. We acknowledge that minimizing human effort is an important research direction. However, most of the slowdowns could be significantly reduced with more time and attention to detail in the UI, which is the realm of human-computer interaction (HCI). For example, advanced tools such as augmented reality, 3D mice, and even language-based instructions could accelerate the process. However, we respectfully argue that while a better UI is relevant, it is not the focus of our core contribution. Our current workflow, involving keyframes and natural language, serves as a guiding example for constructing UI systems that generate intuitive and machine-learnable data. This bridges the realms of HCI and robot learning and may inspire future research.
> Comparison in Table 1 is a bit unfair since other pure RL baselines do not leverage language guidance
Although we agree that it is worth integrating language into RL baselines, we would like to clarify that our RL baselines are set up in a very similar manner to the `-Optimize` approach. In both cases, we sample good actuator initialization poses and optimize policies with the sole objective of minimizing the shape distance. This setup ensures a fair comparison between our RL baselines and the `-Optimize` method. However, our RL baselines show significantly worse performance compared to the trajectory optimization baseline, which benefits from an analytical world model for optimization. This outcome is reasonable, given that previous work~[1] has already demonstrated the superiority of model-based trajectory optimization. As a result, we have focused our attention on conducting language-based experiments mainly on the optimization baseline, allowing us to concentrate on exploring the potential of our language-based approach.
In addition, integrating language into RL baselines may entail non-trivial challenges. For example, many reward functions in our optimization program have temporal aspects, making the original state non-Markovian. As we focus on enhancing the optimization baseline through language-based experiments, we've chosen to defer tackling the complexities of language-conditioned RL to future research. We'll elaborate on this in the revised manuscript.
[1] Huang, Zhiao, et al. "Plasticinelab: A soft-body manipulation benchmark with differentiable physics." arXiv preprint arXiv:2104.03311 (2021).
> differ from other recent robot learning papers
We acknowledge that several papers have emerged recently exploring the application of LLM in robots. However, at the time of submitting our paper, our method distinguished itself from other approaches in several key aspects. We would like to highlight the following points that make our work novel:
- First, our method distinguishes itself from other language-conditioned policies by focusing on translating natural language into rewards rather than generating a direct plan, instructions, or actions, as in previous approaches such as RT-1, PaLM-E, and CLIPort. The fundamental concept underlying our approach is that verifying whether a generated trajectory accomplishes a task is inherently simpler than generating the right plan. Consequently, our method only requires the LLM to compile a reward program, a considerably simpler task than generating a full plan, which may exceed the capabilities of the LLM.
- Secondly, the ability to control low-level physics emerges after integrating a model-based trajectory optimizer, which differs from most existing work that mainly operates on high-level actions such as picking up certain objects.
- Thirdly, our research specifically targets soft-body problems, whereas the majority of existing work primarily focuses on rigid bodies. Soft-body manipulation is a unique domain where our approach of using both vision and language guidance to communicate a task is best suited. In comparison, in a rigid-body setting, a language-only task definition such as "put the cup on the table" might already be sufficient.
- Lastly, it is crucial to emphasize that our framework has a distinctive objective of collecting tasks as data, which sets it apart from existing work that assumes the existence of pre-defined tasks. This novel formulation enables the injection of human feedback into the trajectory generation process, providing a more interactive and adaptive approach. By incorporating human feedback and iteratively refining the system, we aim to create a more robust and flexible framework that can adapt to a wide range of scenarios and tasks.
> Writing is not polished (e.g., missing citations in line 21)
Thanks for pointing out the issue. We will polish it carefully.
We hope our explanation helps alleviate your concerns. If you have any further questions, please do not hesitate to ask.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: I appreciate the detailed response. My concerns are addressed well. | null | null | null | null | null | null |
Semantic HELM: A Human-Readable Memory for Reinforcement Learning | Accept (poster) | Summary: This paper addresses the problem of partial observability by proposing a semantic-enhanced method. This approach converts environmental observations into human-readable language tokens and incorporates them into the hidden state embeddings. Relative to other methods, this technique exhibits resilience against representation collapse and is sufficiently explainable, making it viable for high-stakes domains.
Strengths: (1) The presented method is concise and straightforward to implement. Moreover, its potential for broad applicability across diverse domains stems from the inherent generalizability of the foundational model (CLIP).
(2) The paper provides a comprehensive review of foundational knowledge, enhancing its readability for varied audiences.
(3) The extensive experiments conducted show the effectiveness of the Semantic HELM approach.
Weaknesses: The proposed method, which embeds a subset of predefined tokens as semantic descriptions of each observation frame, appears to be a rudimentary solution to the partial observability problem. Put simply, it may only be suitable for simplistic environments that rely on recognizing specific objects. In comparison to a basic model that labels the current frame and concatenates the observation with its labels, thus serving as input to a memory-based policy, I don't see how this method would stand out.
If the paper aims to show that a language-based memory adequately serves as policy memory, it would be more beneficial to delve into the specifics of description design rather than utilizing a sequence of independent objects. The paper should also engage with environments where past events significantly influence current decisions. I further question whether a singular language-modality memory is sufficient for more complex environments. For example, in three-dimensional video games like Minecraft, the absence of a visual-modality context may lead to inconsistent behavior sequences.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: As stated in the weakness part.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: This paper contains the limitation part including the modality gap and captions diversity.
I believe the author should dive more into the caption design of history observations for more complicated environments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback and address the following concerns:
**Rudimentary approach:**
We agree with the reviewer that our approach is simple and builds on existing works. However, we disagree that SHELM is only suitable for simplistic environments. Our semantic memory performs particularly well in environments that contain real-world objects, as in Psychlab (Figure 5), where we can exploit the strength of the large-scale pre-training of CLIP. We strongly believe that agents deployed in the real world require a form of semantic memory to efficiently interact with their surroundings. Also, we have added results for Dreamerv3 on Psychlab; SHELM significantly outperforms Dreamerv3 there. Further, the described “basic model that labels the current frame and concatenates the observation with its labels” is close to what we propose with SHELM.
If the reviewer is aware of existing works that we missed and are similar, we kindly ask the reviewer to make us aware of them.
**Choice of environments:**
Our MiniGrid-Memory and Psychlab environments are unsolvable without a memory component, thus past events significantly influence future decisions. While in the MiniGrid-Memory environment this is limited to the final decision in the POMDP, this effect is heavily exacerbated in the continuous recognition task of Psychlab. Each observed object heavily impacts future decisions.
**Should memory be solely based on language?**
There is an increasing body of literature that a memory based on language can achieve impressive results on three-dimensional video games, e.g., VirtualHome [1], or Minecraft [2]. SHELM is different to these methods by providing a memory in the form of language for visual environments.
[1] Pre-Trained Language Models for Interactive Decision-Making, Li et al., NeurIPS 2022
[2] Voyager: An Open-Ended Embodied Agent with Large Language Models, Wang et al., arXiv/2305.16291, 2023
**Description design:**
We agree with the reviewer that this is a particularly interesting aspect. We showed that a compression in the form of single tokens is already sufficient to solve complex tasks such as our continuous recognition task (Figure 5). Considering more extensive descriptions, however, poses an additional problem of required context length, especially if each observation is to be stored in the memory. This poses an interesting problem for future work. | Summary: This work introduces a a memory mechanism for RL agents that relies on foundation models but does not require training for correct functioning. The main contribution of this memory architecture is that memory tokens are preserved in a "natural language" space, making easier for humans to interpret the decision process of the agent
Strengths: This work addresses long-term memory and interpretability, two challenging yet core research areas in the RL field.
Authors provide a very deep and broad empirical evaluation and analysis of their method and the baselines, highlighting clearly why each of them robust or weak on the different tests. The fact that the conclusions are drawn over 4 very different environments with 30 independent runs makes the reader confident on the results obtained. The ablation study included also reinforces the results.
The work does also a broad coverage of the existing literature and related works
Weaknesses: The main weakness of this work, and the reason why I cannot give a stronger mark to the current version, is the clarity of the method and its differences with HELMv2, which this work is heavily based on. Specifically I would extend section 2.2:
* I would expand figure 2, where the method is explained, to highlight what are the key pieces that change from HELMv2. The paper does refer to HELMv2 and the main differences at a high level, but I think that including a depiction would greatly help the readers as I had to go several times through 2.1 and 2.2 to understand the differences
* Lines 100-101, you state "Instead of merely fitting the statistics of the respective embedding spaces we introduce a token-retrieval mechanism that preserves the semantics extracted by CLIP." I think this requires detailed explaining
* Lines 108-113 the steps are clearly explained but I miss the "why" are the authors introducing those changes, what each of them is trying to achieve
* Line 119, you were explaining the memory compression level, then follow with "by performing this procedure at every time step, we build up a human-readable representation of the history of the current episode". I would reiterate here what is making things human readable (the representation, not the compression) and again contrast it with HELM.
To allocate space for the depictions and explanations I mention above I would merge parts of the experimental discussion or move some pieces to the appendix.
I also think that authors give a lot of weight to the interpretability of the method, but the work then lacks a proper analysis of it, as I note in the limitations box below
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: I didn't see mention of it in the paper, but authors provide the code in the supplemental material; is there an intention to publish it with the paper?
------
Post rebuttal
I believe authors have correctly addressed my concerns and I am updating my score
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Authors address some limitations at the end of the work, but I missed some further discussion and study on the interpretability of the method and its limits, especially since authors state interpretability as one of the core goals of the work. Specifically, I miss authors including some in-depth analysis of how interpretable the decision making really is, rather than relying so heavily on the assumption that it is interpretable because language is used in the memory stored.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback and adopted the following changes:
We have updated Figure 2 of the paper to make the differences to HELMv2 more explicit (see Figure 1 in accompanying one-page pdf). Along with the updated figure we elaborate on the following points more in-depth:
**Clarification on Line 100:**
We replace the batchnorm component of HELMv2 with a token-retrieval mechanism. The batchnorm component simply adapts the statistics of the CLIP embeddings ($\mu_{\text{CLIP}}$, $\sigma_{\text{CLIP}}$) to those of the LM embedding space ($\mu_\boldsymbol{E}$, $\sigma_\boldsymbol{E}$). Specifically, the resulting CLIP embeddings are first centered by $\mu_{\text{CLIP}}$ and scaled by $\sigma_{\text{CLIP}}$, before being re-scaled by $\sigma_\boldsymbol{E}$ and shifted by $\mu_\boldsymbol{E}$. Thereby, the resulting embedding lives in the pretrained token embedding space of the TrXL. This process, however, does not transfer any semantics between the respective embedding spaces. We can sidestep this issue by first retrieving tokens similar to an observation in CLIP space and subsequently providing these tokens in text form to the LM.
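To make the contrast concrete, the two mechanisms can be sketched roughly as follows (a simplified NumPy illustration with placeholder names, not our actual implementation; the real system uses CLIP and TrXL embeddings):

```python
import numpy as np

def adapt_statistics(clip_emb, mu_clip, sigma_clip, mu_lm, sigma_lm):
    # HELMv2-style "batchnorm": center/scale CLIP embeddings by their own
    # statistics, then re-scale/shift to the LM token-embedding statistics.
    # Only first- and second-order statistics are matched; no semantics
    # are transferred between the embedding spaces.
    return (clip_emb - mu_clip) / sigma_clip * sigma_lm + mu_lm

def retrieve_tokens(obs_emb, token_embs, token_strings, k=1):
    # SHELM-style retrieval: pick the k vocabulary tokens whose CLIP text
    # embeddings are most similar (cosine) to the observation embedding;
    # the token *strings* are then fed to the LM as ordinary text.
    obs = obs_emb / np.linalg.norm(obs_emb)
    toks = token_embs / np.linalg.norm(token_embs, axis=1, keepdims=True)
    top = np.argsort(-(toks @ obs))[:k]
    return [token_strings[i] for i in top]
```

The second function is what makes the memory human-readable: instead of a statistically adapted vector, the LM receives retrieved tokens such as "key" or "door".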
**Clarification on line 108-113:**
- The overlap of the CLIP and TrXL vocabularies is determined to (i) avoid loss of information due to different tokenizers used by CLIP and TrXL, and (ii), to control for the number of tokens TrXL receives per observation.
- CLIP was pre-trained on image-caption pairs, and prompts have been shown to significantly enhance retrieval performance. Following the original ImageNet prompting procedure of [1] (https://github.com/openai/CLIP/blob/main/notebooks/Prompt_Engineering_for_ImageNet.ipynb), we augment each token in the vocabulary overlap with a set of prompts and represent each token as the mean over its prompt-augmented embeddings.
- We will also clarify in Section 3.1 that we did not consistently observe improvements in token retrieval from prompting. For example, in the MiniGrid and MiniWorld environments the prompts did not substantially change the set of retrieved tokens. On Avalon and DMLab, however, handcrafted prompts led to less noisy retrieval.
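For concreteness, the prompt-ensembling step can be sketched as follows (a simplified illustration, not our exact implementation; `encode_text` and the templates are placeholders for the CLIP text encoder and the actual environment-specific prompt sets):

```python
import numpy as np

# Hypothetical prompt templates; the actual environment-specific prompts
# are listed in Table 1 of the appendix.
TEMPLATES = ["a photo of a {}.", "a rendering of a {}.", "a photo of the small {}."]

def token_embedding(token, encode_text, templates=TEMPLATES):
    # Represent a vocabulary token as the re-normalized mean of its
    # prompt-augmented CLIP text embeddings, following the ImageNet
    # prompt-ensembling procedure.
    embs = np.stack([encode_text(t.format(token)) for t in templates])
    mean = embs.mean(axis=0)
    return mean / np.linalg.norm(mean)
```

Averaging over several prompt-augmented embeddings smooths out phrasing-specific noise in the individual text embeddings, which is why it tends to make retrieval less noisy.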
**Clarification on line 119:**
Since visual observations are represented in the form of language tokens in the memory, they can be interpreted by humans. While HELM and HELMv2 also leverage the latent structure of a pre-trained LM, they do not explicitly represent the observations in the form of text, thus, do not provide a human-readable form of past observations.
**Question on Code:**
Yes, we have already added our code in the supplementary material during the submission and will make it publicly available.
We hope these changes clarify the points raised by the reviewer. We will update those in the revised version of our manuscript. Further, we will merge the second and third paragraph in the introduction and tone down claims w.r.t. interpretability such as “interpretable decision-making”, or “improved user-trust”. Also, we will put into perspective that the enhanced interpretability aspect is enabled by representing past observations in the form of language and added intuitive examples on how and why this is useful, e.g., enhanced debugging of modular architectures in RL for developers. This will provide some space to accommodate a more extensive elaboration on methodology. Finally, we will devote a new section to an extensive discussion about limitations of SHELM.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: I want to thank the authors for their response and further clarifications. I believe the additions further clarify the contributions of this work, I am updating my score accordingly | Summary: This study presents Semantic History-Embedded Language Model (SHELM), a new interpretable memory mechanism for reinforcement learning (RL) agents in partially observable scenarios. Current memory methods are often uninterpretable for humans, which hampers their use in critical areas like autonomous driving or healthcare. SHELM addresses this by using human language to clarify the decision-making process. It employs CLIP to convert visual data into language tokens and a pre-trained language model to create an interpretable past record. Despite limitations in synthetic environments, SHELM performs well in photorealistic settings and shows robustness to environmental disturbances.
Strengths: The paper's key strength is its development of the Semantic History-Embedded Language Model (SHELM), an innovative memory mechanism that brings improved interpretability to reinforcement learning. This is crucial for deploying RL agents in high-stake applications where human understanding of the decision-making process is vital. SHELM's ability to link visual inputs with language tokens makes the agent's memory easily understandable for humans. Moreover, the method demonstrates state-of-the-art performance in photorealistic environments and superior robustness to environmental noise compared to existing approaches. In addition, the narrative of the paper is very clean and clear, making it easy to follow.
Weaknesses: - There are minor typos in the paper that should be corrected for revision. E.g. in line 56: "a interpretable" -> "an interpretable"
- In the last paragraph of the Introduction, there is a claim saying: "On Avalon we find that a semantic memory does not yield any benefits. In fact, we observe that agents with memory tend to learn faster, but reach lower final performance than a markovian baseline." I am wondering if there is any explanation for this. I also could not find where it has been discussed in the Experiments sections. Could you please provide a pointer for this?
- How can we ensure that the actor-critic head does not ignore the features passed to it from the TrXL? This means that the agent may make decisions solely based on features passed to it from the CNN module. Even if this happens at very few decision steps, then this could lead to serious questions about the idea of the paper and the extracted insights.
- This work endeavors to mitigate the black-box nature of deep RL agents through the utilization of a black-box tool (LM). I would use the metaphorical concept of "fighting fire with fire" to describe this paper. As discussed in the paper, the ultimate aspiration is to ensure interpretability throughout the decision-making process when applying such methodologies in practical scenarios. However, the lack of interpretability and transparency in the LM raises profound doubts regarding the fundamental premise of the paper.
- From Figure 3, SHELM is only outperforming baselines on the MiniGrid Memory task. On other tasks, Either HELMv2 or Dreamerv2 is better. Is this a correct observation?
- In line 145, there are references to Figures 8 and 9. But these figures are not in the main paper. I would suggest moving the discussions of those figures to the appendix where they are.
- From the results of the paper, they are not as convincing as promised in the abstract. Could you please clarify this?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback and address the mentioned weaknesses as follows:
- Thank you for pointing out the typos, we will correct them.
- Thank you for pointing out the confusion about the Avalon results. Indeed this claim is wrong, since we observed performance on-par for Dreamerv2 and HELM with PPO. However, after 10M timesteps there is a significant improvement for HELMv2 and SHELM over PPO. Thus, the addition of memory can facilitate sample efficiency on Avalon.
- If the task is unsolvable without a memory component, then the policy must rely on the history, which is the case for the MiniGrid-Memory and Psychlab environments. While for MiniGrid-Memory only the final decision depends on the memory, in Psychlab the dependency on memory is very frequent, i.e., approximately every 5 steps for episodes of length > 300. We will point this out in our revised version.
- We agree with the reviewer that the downstream policy itself remains a black-box, but we would like to point out that we explicitly specified that our work only focuses on enhancing interpretability in the memory module in line 326-327. However, we agree with the reviewer that some of our claims about interpretability were a bit unfortunate. Therefore, we have revised the claims w.r.t. interpretability of the decision-making process and user-trust. Specifically, we will rewrite the second paragraph in the introduction and tone down claims in the abstract to clarify that the enhanced interpretability aspect is enabled by representing past observations in the form of human-readable language. Further we will add intuitive examples on how and why this is useful, e.g., enhanced debugging of modular architectures in RL for developers.
- Considering only Figure 3, then yes, SHELM only significantly outperforms the competitors in the MiniGrid-Memory environment. This is due to the fact that the other environments are solvable by a memory-less baseline (which we add in the revised version). Further, on Avalon SHELM reaches the highest performance after 10M interaction steps and on Psychlab, which presents the highest dependency on memory, SHELM significantly outperforms competitors. We will add a more extensive discussion about our experimental results and limitations in our revised version.
- Thank you for pointing out the references to figures from the appendix. We will move the discussion about these results accordingly.
- We have claimed in the abstract: “Our memory mechanism achieves state-of-the-art performance in photorealistic environments where memorizing the past is crucial to solve tasks.” Indeed, this is a bit misleading. While SHELM achieves SOTA performance on MiniGrid-Memory, we are unable to compare to prior SOTA methods on Psychlab since those report human-normalized scores and published neither the human scores used for normalization nor the codebase. Also, we exclude Avalon from the claim since our experiments have shown that memory does not appear to be crucial to solve tasks there (a memory-less agent performs best). So we rephrased this claim to: “On a photorealistic continuous recognition task where memorizing the past is crucial, our memory mechanism converges two orders of magnitude faster than prior methods.”
Strengths: I think the paper is extremely thorough in its experiments. It evaluates on four different environments and against reasonable baselines such as the previous HELM works and Dreamer. It even shows experiments where it is about par with other methods rather than cherrypicking environments where theirs does better.
On the Memory and PsychLab environments, which seem the most explicit about requiring memory, this method does perform extremely well compared to baselines. The method does have some natural advantage on PsychLab because it has the benefit of the knowledge from CLIP, but HELMv2 also has CLIP encodings, so this isn't an unfair advantage.
The idea is interesting and makes sense. I think the motivation for having a memory that is (somewhat) interpretable is pretty strong.
The ablation studies are really interesting and always very useful and careful.
Weaknesses: The interpretability claims should I think be somewhat caveated in the paper. As the authors acknowledge, SHELM can in fact end up learning associations that are not quite semantically correct (as stated in line 224). I get what the authors are saying here that overall it is much more aligned semantically than prior methods, but I think some of the statements about interpretability in the intro should be somewhat hedged.
I did not quite understand from the paper how the vocab was determined, and it seems potentially limiting. Reading section 2.2, it looks like it finds the token overlap between the LM and CLIP and appends a custom prompt. First, does this imply that you cannot encode anything longer than a single language token (not including the prompt)? That seems like a big limitation if true. Also, how are the prompts decided in each environment (is there a table somewhere)? This is some additional information that needs to be hand-defined per environment.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: See above for questions about vocab.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: See above - possibly some limitations about the vocab that I think the paper could address better.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate the positive feedback from the reviewer.
**Weaknesses:**
- We agree with the reviewer that some of our claims about interpretability were a bit unfortunate. Therefore, we have revised the claims w.r.t. interpretability of the decision-making process and user-trust. Specifically, we will rewrite the second paragraph in the introduction as well as tone down claims in the abstract to clarify that the enhanced interpretability aspect is enabled by representing past observations in the form of human-readable language. Further we will add intuitive examples on how and why this is useful, e.g., enhanced debugging of modular architectures in RL for developers.
- For now, we only encode observations into a single token or a set of tokens. We agree, this is a limitation that we will make more explicit in our revised version. Table 1 in the appendix shows environment specific prompts and appendix B elaborates on the prompt selection strategy. We only used prompts for Psychlab and Avalon, since it did not seem to qualitatively improve retrieval. We also want to highlight that these prompts are not necessary, but useful since they make the retrieval process less noisy.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications and comments.
I am still quite positive on the paper, so will keep my rating at Accept. | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for the time they invested to give constructive feedback which helps us to greatly improve our manuscript.
We are glad the reviewers found our paper clear and well-written (xnPR, FbDD), relevant (xnPR), found our empirical analysis thorough (a9Pk, f2kZ) and our method straightforward to implement (RcPE).
To address the weaknesses pointed out to us we have made the following changes to our manuscript:
- We have toned down claims w.r.t. interpretability of the decision-making process, user-trust, and high-stake environments. Further, we add a motivation for our approach and clarify that the enhanced interpretability stems from the fact that we represent past observations in the form of language. Also, we provide the reader with intuitive examples of why and how a memory represented in written language may be useful.
- We devote a section to limitations in the main paper and make a more in-depth analysis on results and potential shortcomings in terms of vocabulary, prompt design, and interpretability.
- We extend our methodology section and figure 2 (see accompanying one-page pdf) to make the difference to HELMv2 more explicit and provide the reader with a deeper understanding of the methodology of SHELM. We have also updated ill-defined terminology such as “Markovian baseline”.
- We provide additional results (see accompanying pdf) for
- Memory-less PPO on MiniGrid and MiniWorld
- Dreamerv2 on Avalon
- Preliminary results on Dreamerv3 on MiniGrid and Psychlab
- Preliminary ablation study for different vision encoders
- BLIP-2 qualitative results for captioning of observations
Thereby, we have addressed the weaknesses raised by the reviewers and would gladly answer additional questions about our manuscript.
Pdf: /pdf/6c2701501a42bfec3eba1e51670a3b05cd6d9d7a.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper looks at using a compressed representation of recent observations in the form of top-k CLIP-extracted text entities. They look at training policies in a few different simulated environments with this representation. They cast this as an extension of the prior "HELM" method, although their key idea does not need to necessarily be paired with HELM.
Strengths: The overall discussion, paper writing, introduction, and experimentation all seem pretty "solid". The authors seem devoted to a genuine scientific exploration, and they build rapport with the reader such that the reader believes the experimentation and investigation are earnest.
Weaknesses: There are a couple of main limitations of the overall method in my mind:
1) In the first place, the entire reason used to motivate the investigation was to add interpretability. However, I'm not sure that a small list of short words, as extracted by CLIP, is a huge value-add for interpretability, since the downstream policy is still black-box. It is possible that the downstream black-box part of the policy model learns correlations with the text features that are not what a human would interpret them to be. This limitation is not really investigated empirically. Instead, after the introduction, all of the results are about performance on the tasks and not the interpretability of that performance.
2) Another huge limitation that is not discussed is that the proposed method would only be expected to work well in environments where there is a "single or few things" observable at a time in an image, and the mere existence of these objects is all the downstream policy needs to know. The reason is that CLIP can be used okay as a zero-shot object detector, but it is not good at fine-grained understanding of environments. For example, robot manipulation environments would not likely be well helped by the method.
I consider the above two to be the primary limitations of the method, and my recommendation is primarily driven by them. Here are a few other points for the authors to consider as they revise and for future work:
- The term "memory" is repeatedly used here. However, I'm not sure this is the best word choice for the authors' method. The authors are not creating any type of persistent memory mechanism; the method only serves to compress the past K observations down to their top-k CLIP words. Since nothing persistent is created, I would argue it's not necessarily a memory mechanism.
- "Markovian baseline" -- I know what Markov means, but I'm not sure exactly what they mean by Markovian baseline. I think they mean that the baseline is memory-less and operates only on the current observation, which would assume the environment is Markov even though it's not. But it could also mean that they give the agent full observability so that it is indeed Markov. This is not clear and should be clarified; moreover, "Markovian baseline" is likely not the best shorthand to use, even though it seems the authors may have done this before.
- I was about to suggest an interesting baseline: instead of using the top-k CLIP text entities, just train a policy on the underlying CLIP vision embeddings themselves -- but then I realized this seems to be essentially what HELMv2 was.
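To make the mechanism I am critiquing concrete, the top-k compression step can be sketched in a few lines of numpy (the embeddings below are random stand-ins; the actual method uses frozen CLIP image/text encoders):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["key", "door", "ball", "wall", "lava"]

# Stand-ins for CLIP text embeddings of the vocab tokens; in the paper
# these come from the frozen CLIP text encoder.
text_emb = rng.normal(size=(len(vocab), 8))
text_emb /= np.linalg.norm(text_emb, axis=1, keepdims=True)

def top_k_tokens(image_emb, k=2):
    """Return the k vocab tokens whose embeddings are most cosine-similar
    to the image embedding -- the 'semantic bottleneck'."""
    image_emb = image_emb / np.linalg.norm(image_emb)
    sims = text_emb @ image_emb
    idx = np.argsort(sims)[::-1][:k]
    return [vocab[i] for i in idx]

# Each past observation is compressed to such a short token list.
obs_emb = rng.normal(size=8)
print(top_k_tokens(obs_emb, k=2))
```

Nothing here persists beyond the last K observations, which is the basis of my "memory" terminology concern above.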
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Does each of the discussed weaknesses make sense?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The limitations section in my opinion is incomplete since the authors do not address the primary limitations I pointed out in the "Weaknesses" discussion.
There is no discussion of broader impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback and address the concerns as follows:
**First Weakness:**
We agree with the reviewer that the downstream policy itself remains a black box, but we would like to point out that we explicitly specify that our work only focuses on enhancing interpretability in the memory module (line 326-327).
Further, we believe our method adds value for (i) developers of RL agents and (ii) potential applications in the real world. Developers can perform ad-hoc analysis on token mappings to determine whether semantics can be extracted, while post-hoc analysis allows troubleshooting to investigate whether important bits of information were stored in the memory. We provide an example of the latter in Appendix C.
Agents interacting in the real world that are equipped with a semantic memory in the form of language can communicate their past findings to an end-user to allow, for example, tracking of lost objects.
Having said that, we agree with the reviewer that some of our claims about interpretability were a bit unfortunate. Therefore, we have revised the claims w.r.t. interpretability of the decision-making process and user-trust. Specifically, we will rewrite the second paragraph in the introduction as well as tone down claims in the abstract to clarify that the enhanced interpretability aspect is enabled by representing past observations in the form of human-readable language. Further we will add intuitive examples on how and why this is useful, e.g., enhanced debugging of modular architectures in RL for developers.
**Second Weakness:**
We respectfully disagree with the reviewer that our semantic memory would not be beneficial for robot manipulation environments. Robots interacting in the real world will require a form of semantic scene understanding. In fact, other works have demonstrated the usefulness of CLIP for robot manipulation tasks, e.g. [1,2].
We agree that a semantic memory alone will not be sufficient for solving fine-grained control tasks. However, our memory is modular and could easily be integrated into existing robotic manipulation pathways. We will elaborate on this in the revised version of our manuscript.
[1] CLIPort: What and Where Pathways for Robotic Manipulation, Shridhar et al., CoRL 2021
[2] CoWs on Pasture: Baselines and Benchmarks for Language-Driven Zero-Shot Object Navigation, Gadre et al., arXiv/2203.10421
**Minor Weaknesses:**
- The “semantic database” of token embeddings we retrieve from is persistent across all environments; therefore, we believe the terminology of “memory” is accurate. However, we will differentiate between the “semantic memory” (i.e., the database of tokens to retrieve from) and the “episodic memory”, which is represented by the LM.
- We will change the terminology from “Markovian” to “Memory-less”, which is what our PPO method represents. | Summary: Paper presents a method to augment a feed-forward RL policy with an “interpretable” memory. In general, it is based off GTrXL and HELM, which uses transformer-XL (TrXL) to encode current visual observation (or further map it to some embedding through embedding look-up, CLIP encoder, etc) while paying attention to some past observations (through the memory mechanism in TrXL). Finally, both the output of TrXL (reterived memory) and current visual observation embedding are fed to actor and critic heads. The proposed method replace CLIP embedding with a bottleneck in explicit text, which is claimed to be more “interpretable”. The proposed method is evaluated on several domains again some baselines and ablations.
Strengths: +The paper is generally clear and well-written. Although there seems to be many legacies in this direction (HELM, HELMv2, etc), the authors clearly explains how their method is different from these prior arts, especially on what motivates the proposed changes, making it friendly to readers who might not quite familiar with memory in RL agents.
+The study problem is relevant to the interest of NeurIPS. The use of language model, CLIP and other foundation models in RL should be of interest to many audiences of this conference, which could generate possible impact.
+The proposed method generally makes sense. Instead of sending CLIP embedding directly to TrXL, introducing an explicit text bottleneck does bring more interpretability, and the results does not seem to indicate significant drawbacks from the previous model using implicit embeddings.
+Results on lots of challenging domains (minigrid, MiniWorld, Avalon, PsychLab, etc) with ablations seems to justify the claims well.
Weaknesses: Having said those above, I do have some concerns on baselines and results, listed below:
-Although the authors mention four domains, not all baselines (PPO, PPO w/ LSTM, DreamerV2, HELM v1/v2, etc.) are tested on all of these environments. E.g., there are no results with DreamerV2 on Avalon. Can the authors comment on that? Also, I couldn’t find much justification for the selection of these baselines – is the main goal of the experiments to validate the effectiveness of memory mechanisms in RL agents, or simply to demonstrate that the proposed memory suits the tasks better? What is the state-of-the-art agent in the four environments? These points need to be further elaborated to help us understand the significance of the results.
One specific point: since DreamerV3 has been released, is there a particular reason to only include results from its predecessor?
-One main contribution made by this paper is to upgrade the CLIP + TrXL memory in HELMv2 to a more “interpretable” version. This raises two issues that do not seem to be addressed in the paper:
1. Replacing the CLIP embedding in HELMv2 with explicit text indeed introduces a semantic bottleneck in the memory pipeline, as the visual observations are now compressed into even lower-bit discrete text instead of dense embeddings. In theory, this could bring some drawbacks in conveying information about the past. Some of the results do indicate this (e.g., slower convergence on MiniGrid, worse results on MiniGrid-Memory), but some do not (e.g., PsychLab). Can the authors explain more about how such differences emerge? In fact, the description of each domain could have been more precise to allow more space for analysis. Specifically, analysis of the domain-specific impact on convergence and end performance for different memory-based (w/ or w/o language bottleneck) or memory-free agents should be included.
2. Interpretability seems to be a major concern of this paper; however, I couldn’t find any results, metrics, or analysis in the main text dedicated to demonstrating the interpretability of the proposed method (there is only an illustration in Figure 1, but I don’t think that counts as a result). How interpretable can the proposed memory be? Does it indeed exhibit some interesting interpretable behavior? What is inside the memory when run on these four domains? Compared to baselines, do users trust this system more due to its interpretability rather than its better results? These are all questions that need to be answered to justify the interpretability of the proposed method.
-Here are some ablations/baselines that could be missing:
- GTrXL (transformer-XL instead of LSTM for RL)
- HELM/SHELM w/ Different CLIP models (RN50, ViT, etc)
- Fine-tune TrXL (it is frozen in SHELM no?)
- Simply concat t-1 observation (CLIP embedding or text) with current visual observation
Minor:
-Which env is used for ablations?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See [Weaknesses]
---
### Post-rebuttal
Thank you for the detailed reply and the additional results. My concerns on baselines and limitations are mostly addressed. I will raise my score to 5. I will consider raising my score even further if the remaining questions can be addressed.
---
Raised my score to 6 after further discussions with the authors.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: To ensure interpretability, the authors have to pre-define a set of sentences, which effectively limits the scope of visual observations that can be stored in the memory. Maybe using a generalist captioning model like BLIP-2 could help mitigate this?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback and address the raised points as follows:
**Baselines:**
We have **added the missing baselines**, specifically, Dreamerv2 on Avalon (these were originally shown only in Table 3 in the appendix), and PPO on MiniGrid and MiniWorld.
Dreamerv2 is SOTA on Avalon after 50M timesteps [1]. To the best of our knowledge HELMv2 is SOTA on the selected MiniGrid and MiniWorld environments [2].
The only existing results for Psychlab that we are aware of are GTrXL and the results reported in [3]. Unfortunately, they report human-normalized scores and published neither the human scores used for normalization nor the codebase. Thus, we are unable to directly compare to these approaches. However, SHELM converges within 10M interaction steps, whereas both prior methods require billions of interaction steps until convergence. From visual inspection, the policy trained with SHELM behaves close to optimally. Results for fine-tuning TrXL and for a GTrXL trained with PPO on MiniGrid were reported in [4], where HELM was more sample efficient. Also, no codebase for the original GTrXL implementation using V-MPO is publicly available.
We provide some **preliminary results for Dreamerv3** in the one-page pdf accompanying our general response. SHELM significantly outperforms Dreamerv3 on the continuous recognition task from Psychlab, and Dreamerv3 is less sample efficient than SHELM on one of the MiniGrid environments. We will deliver results for Dreamerv3 on all remaining environments as well.
We also added **preliminary results on an ablation study** where SHELM uses a RN50 CLIP encoder instead of ViT-B16. The results corroborate our analysis in Section 3.1., i.e., that a ViT-based encoder is better suited for SHELM. We will add the final results as well as a discussion on these in our revised version.
**Concatenating the t-1 observation** with the current observation essentially resembles framestacking which is guaranteed to fail on tasks such as MiniGrid-Memory and Psychlab, where memory dependencies span across longer time intervals.
[1] Avalon: A benchmark for RL 379 generalization using procedurally generated worlds, Albrecht et al., NeurIPS 2022
[2] Foundation models for history compression in reinforcement learning, Paischer et.al., NeurIPS FMDM Workshop, 2022.
[3] Generalization of reinforcement learners with working and episodic memory, Fortunato et al., NeurIPS 2019
[4] History Compression via Language Models in Reinforcement Learning, Paischer et al., ICML 2022
**Analysis of results:**
The reviewer states “worse results of SHELM on MiniGrid-Memory”, which we believe to be a mistake as SHELM significantly outperforms all other methods on MiniGrid-Memory.
The inconsistent results on MiniGrid and MiniWorld, however, are due to the fact that these environments are solvable without memory (see the results for the PPO baseline we added on both environments). Memory can be helpful in these environments, e.g., to increase sample efficiency, but is ultimately not required to solve the tasks. The only MiniGrid environment that truly requires memory is MiniGrid-Memory, where SHELM significantly outperforms the competitors. We will make this clearer in the revised version.
**Results on interpretability:**
The retrieval examples that we showed resemble examples for what is stored in the memory. Figure 10 in the appendix shows examples for MiniGrid-Memory, and Figure 4 and 5 show examples for Avalon and Psychlab respectively. We will clarify this in our revised version.
We agree with the reviewer that some of our claims about interpretability were a bit unfortunate. Therefore, we have revised the claims w.r.t. interpretability of the decision-making process and user-trust. Specifically, we will rewrite the second paragraph in the introduction as well as tone down claims in the abstract to clarify that the enhanced interpretability aspect is enabled by representing past observations in the form of human-readable language. Further we will add intuitive examples on how and why this is useful, e.g., enhanced debugging of modular architectures in RL for developers.
**Limitations:**
A pre-defined set of sentences/prompts is not required, we could simply use the pre-defined vocabularies, i.e. single tokens and encode those with CLIP. However, in our analysis in Section 3.1 we found that prompts appear to make the token retrievals less noisy. We only use prompts for Avalon and Psychlab (shown in Table 1 in the appendix) though, since we did not observe any improvement for MiniGrid observations in our qualitative analysis. We will make this more explicit in the revised version.
We **added examples for BLIP-2 generated captions** for observations of MiniGrid, Avalon, and Psychlab. BLIP-2 works well for DMlab, but produces rather noisy captions for MiniGrid and Avalon. For now we refrain from using captioning engines since they would drastically increase the required context length and substantially slow down SHELM. However, as mentioned in our conclusion we believe this is a fruitful avenue for future work.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed reply and the additional results. My concerns on baselines and limitations are mostly addressed. I will raise my score to 5. For the final version, please make sure to revise the claim on interpretability, include these results (ex. DreamerV3, PPO on MiniGrid/MiniWorld, etc) in the main text, and be more transparent to some cases where the proposed approach can struggle (memory is less useful if my understanding is correct).
some additional questions:
-What does the * mark attached to "PPO" mean in Fig. 3 of the response PDF? I do notice the result looks different from what it was in the main paper (it was a bit higher than HELM but not in the response PDF; also, its curve is missing).
-In fig.4 (main paper), both HELMv2 and SHELM underperform HELM, while HELM is slightly better than PPO, and the conclusion is "These results suggest that the memory component is not beneficial for Avalon". I think HELM also offers a kind of memory, no? Can the authors explain this?
---
Reply to Comment 1.1.1:
Comment: Thank you for your response!
- Indeed, we slightly altered the plot: we have changed the terminology from "Baseline" in the submission to PPO*, which was originally trained in [1]. We did this to make it clearer that this method is memory-less. Since results were only reported for 50M interaction steps for PPO* and Dreamerv2, we cannot show any learning curve. The remaining curves are equivalent to the ones in the submission, except that we accidentally mixed up the color codes. Sorry for the confusion; of course we will make the color codes for each method consistent across the paper.
- Yes, HELM also utilizes a memory, however PPO does not, which indicates that the same performance can be achieved without a memory mechanism. Hence, our interpretation that the addition of memory does not provide benefits for Avalon.
[1] Avalon: A benchmark for RL generalization using procedurally generated worlds, Albrecht et al., NeurIPS 2022 | null | null | null | null |
Graph-Structured Gaussian Processes for Transferable Graph Learning | Accept (poster) | Summary: This work proposes a generic graph-structured Gaussian process framework (GraphGP) for investigating the knowledge transferability between homophilic and heterophilic graphs. GraphGP uses a structure-aware neural network to simultaneously encode local node representations and global (domain-level) graph representations. In addition, a simple neighborhood selection strategy is designed to tackle knowledge transferability between homophilic and heterophilic graphs. Experimental results demonstrate superior performance compared to baseline models.
Strengths: (1) The problem of transferable graph learning over non-IID graph data is interesting.
(2) The paper is clearly written and well formulated.
(3) This paper theoretically discusses the expressive power of GraphGP and analyzes the knowledge transferability across graphs from the perspective of graph domain similarity.
Weaknesses: (1) In the methodology section, the description of the GraphGP algorithm flow should be clearer. It is recommended to add a brief description of the algorithm flow, an algorithm table, and an overall framework figure.
(2) The algorithmic complexity analysis and convergence analysis should be described in more detail.
(3) The theoretical analysis of the algorithm is sufficient, but the experiments are weak. To verify the generalization of the model, more transfer learning tasks, such as classification tasks, should be designed in the experiments.
(4) The experiments should include several real-world datasets to verify the effectiveness of the algorithm.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: (1) There are many hyperparameters in the GraphGP algorithm; is the performance of the model sensitive to these hyperparameters?
(2) There are many network structures for graph representation learning in existing research. Why do the authors use a message-passing graph neural network to extract graph features? Can it be well applied to transfer learning tasks?
(3) What are the limitations of the proposed methods?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the insightful comments and valuable suggestions. Hereafter, we present responses addressing the concerns and queries raised by the reviewer.
**Q1:** There are many hyperparameters in the GraphGP algorithm, and whether the performance of the model is sensitive to these hyperparameters?
**A1:** As illustrated in lines 284-292, the hyperparameters involved in the proposed GraphGP algorithm were optimized by maximizing the log marginal likelihood in Eq. (7). In appendix A.7.2, we also reported the learned weights $\alpha_i$ after optimization. It is notable that hyperparameter optimization is commonly used in Gaussian process regression models [46]. The optimized hyperparameters tend to be positively correlated with the marginal likelihood, thus leading to better empirical performance in real scenarios.
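As a generic illustration of this mechanism (standard scikit-learn GP regression on toy data, not the GraphGP implementation itself): the kernel hyperparameters are treated as free parameters and fitted during training by maximizing the log marginal likelihood, analogous to Eq. (7).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=40)

# length_scale=1.0 is only an initial value; fit() optimizes the kernel
# hyperparameters by maximizing the log marginal likelihood.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
gp.fit(X, y)
print(gp.kernel_)                         # optimized hyperparameters
print(gp.log_marginal_likelihood_value_)  # objective value at the optimum
```

Because the hyperparameters are chosen to maximize this objective rather than hand-tuned, performance is generally not sensitive to their initial values.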
**Q2:** There are many network structures for graph representation learning in existing research. Why the author uses message-passing graph neural network to extract graph features? Can he be well applied in the task of transfer learning? Can it be well applied to transfer learning tasks?
**A2:** Message-passing graph neural networks have achieved promising performance in a variety of graph mining tasks. They have also motivated recent transferable graph neural network models, e.g., GRADE [58], AdaGCN [14], EGI [71], etc. Thus, message-passing graph neural networks have been applied to transferable graph learning tasks. Following this direction, in this paper, we explored the theoretical understanding of message-passing graph neural networks on transferable graph learning tasks.
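For readers unfamiliar with the mechanism, a single message-passing propagation step can be sketched in numpy (a generic GCN-style layer with toy weights, not our exact architecture):

```python
import numpy as np

def message_passing_layer(A, H, W):
    """One propagation step: aggregate neighbor features with the
    degree-normalized adjacency, then transform and apply ReLU."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)    # aggregate, transform, ReLU

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)  # path graph on 3 nodes
H = np.eye(3)                           # one-hot node features
W = np.ones((3, 2))                     # toy weight matrix
print(message_passing_layer(A, H, W).shape)  # (3, 2)
```

Stacking such layers is the feature extractor underlying the transferable graph neural networks cited above.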
**Q3:** What are the limitations of the proposed methods?
**A3:** The limitations of the proposed methods are discussed in Appendix A.2. In this paper, we focus on the covariate shift assumption. In addition to covariate shift, label shift is another assumption in transfer learning scenarios. It can be challenging to extend the developed transferable graph Gaussian processes to tackle label shift scenarios. Besides, the proposed methods cannot directly handle the test-time adaptation scenarios where only testing samples are available in the target domain.
**Q4:** In the methodology section, the algorithm flow description of GraphGP should be clearer. It is recommended to add some brief descriptions of the algorithm flow, draw the algorithm table, and draw the overall framework figure.
**A4:** First of all, we would like to clarify that we have shown the algorithm details of GraphGP in Algorithm 1 in Appendix A.6. Second, following the reviewer's suggestion, we will provide more descriptions of the algorithm flow and the overall framework figure in the revised version.
**Q5:** algorithm complexity analysis and convergence analysis should be described more specific.
**A5:** We would like to clarify that in Appendix A.6, we did analyze the computational complexity of GraphGP. In addition, we also provided a more computationally efficient approximation of GraphGP and empirically evaluated this approximation approach in Tables 9 and 10.
**Q6:** The theoretical analysis of the algorithm is sufficient but the experiment is weak. In order to verify the generalization of the model, more transfer learning tasks should be designed in the experiment, such as classification tasks.
**A6:** Thank you for acknowledging our theoretical contributions in studying the knowledge transferability between source and target graphs, which is the main focus of this paper. Here we conduct additional experiments to adapt GraphGP to the classification task. Following previous work [41], we can use the one-hot representation of class labels to set up a multi-output regression problem. Table E reports the results (measured by node classification accuracy on the target graph) of GraphGP on node classification tasks. Here we use the social networks from [r3], where Blog 1 and Blog 2 are two disjoint social networks extracted from BlogCatalog. This further verifies the effectiveness of GraphGP over the baselines on cross-network node classification tasks.
| Model | Blog 1 $\to$ Blog 2 | Blog 2 $\to$ Blog 1 |
|---------|-----------------------|-----------------------|
| RBFGP | 0.2113$_{\pm 0.0087}$ | 0.2580$_{\pm 0.0083}$ |
| GINGP | 0.5190$_{\pm 0.0225}$ | 0.5097$_{\pm 0.0230}$ |
| GraphGP | 0.5505$_{\pm 0.0052}$ | 0.5244$_{\pm 0.0162}$ |
Table E: Performance comparison on cross-network node classification tasks
[r3] Shen, Xiao, Quanyu Dai, Fu-lai Chung, Wei Lu, and Kup-Sze Choi. "Adversarial deep network embedding for cross-network node classification." In Proceedings of the AAAI conference on artificial intelligence, vol. 34, no. 03, pp. 2991-2999. 2020.
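The one-hot construction from [41] can be sketched with a generic GP regressor on toy data (a scikit-learn stand-in, not the GraphGP model): regress onto one-hot label vectors, then classify by the argmax over the predicted outputs.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
# Toy 2-class data: class 0 clustered near -1, class 1 near +1.
X = np.concatenate([rng.normal(-1, 0.3, (20, 1)),
                    rng.normal(1, 0.3, (20, 1))])
labels = np.array([0] * 20 + [1] * 20)

Y = np.eye(2)[labels]                # one-hot targets -> multi-output regression
gp = GaussianProcessRegressor(alpha=1e-2).fit(X, Y)

pred = gp.predict(X).argmax(axis=1)  # argmax over outputs recovers the class
print((pred == labels).mean())       # training accuracy
```

The same recipe extends to any number of classes by widening the one-hot target matrix.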
**Q7:** The experimental part should contain several real-world datasets to verify the effectiveness of the algorithm.
**A7:** We would like to clarify that we did use real-world data sets in our experiments. Take the agriculture data set as an example: it was collected by several universities to study the relationship between diverse plant traits related to growth and the leaf hyperspectral reflectance. More details regarding the data description can be found in Appendix A.7.1.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for providing a detailed rebuttal. Based on the contribution, I would like to keep the rating as borderline accept.
---
Reply to Comment 1.1.1:
Title: Thanks for Response
Comment: Dear Reviewer e15N,
Thank you very much for your response.
Best Regards,
Authors of Submission13750 | Summary: This paper deals with the transferable graph learning problem, especially between homophilic and heterophilic graphs. To solve this problem, the authors propose a graph Gaussian process (GraphGP) algorithm, which is derived from a structure-aware neural network encoding both sample-level node representations and domain-level graph representations. The effectiveness of GraphGP is verified both theoretically and experimentally on various transferable node regression tasks.
Strengths: 1. Comprehensive theoretical analysis.
2. Clear problem definition.
3. New techniques to consider the transfer learning between homophily and heterophily graphs.
Weaknesses: 1. This paper has poor organization/presentation. There are two problem formulations and several proposed methods, and it is not clear which is the major contribution. It is also not clear how the different techniques mentioned in this paper relate to each other.
2. Some comparison experiments are missing.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: 1. In the introduction, the authors say “However, most existing works followed the IID assumption”. However, this seems incorrect, since graph data are not IID, and existing transfer learning methods for graphs should also consider the non-IID assumption. Could the authors give more explanation on this?
2. In line 52, the authors mention “structure-aware node”. It would be better to explain what kind of node can be called “structure-aware”.
3. In line 88, the authors claim that “However, most existing works focus on either investigating the transferability of a pre-trained graph neural network”. It is not clear why existing transfer learning techniques for pretrained models cannot solve the problem formulated in this paper. This is a very important question for judging the contribution of the proposed techniques.
4. In equation 2, what do “i” and “j” refer to? And what is the meaning of “layer width”?
5. In Section 4, the authors introduce several methods, including the “Structure-Aware Neural Network”. It is not clear whether this method is proposed by the authors. If it is, the novelty is largely limited, since it is just a typical existing message-passing network with a different theoretical analysis.
6. The authors formalize problems in both Sections 3.1 and 4.3. It is confusing which one is the goal of this paper.
7. This paper only compares the proposed model with Gaussian process models, while existing graph transfer learning techniques are not compared.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 1 poor
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer's constructive comments. We would like to address the concerns and questions as follows.
**Q1:** More explanations on “most existing works followed the IID assumption”?
**A1:** In the introduction section, we started by introducing several transfer learning approaches, e.g., [3, 16, 39, 69, 8, 53, 24, 40, 65], under the IID assumption that samples are independent and identically distributed within each domain (see lines 17-21). Knowledge transferability under the IID assumption has been widely studied in past decades. In transfer learning scenarios, IID data (e.g., images for object recognition) are more frequently used than non-IID data. However, previous works under the IID assumption [3, 16, 39, 69] might fail to handle transferability across domains with non-IID data. Furthermore, lines 46-50 discussed the existing transfer learning techniques for graphs under the non-IID assumption.
**Q2:** Explanation of structure-aware nodes?
**A2:** We would like to clarify that in this case, the "structure-aware input node" indicates the pair of inputs $(v, G)$ given a node $v$ and its associated graph $G$. Given a graph $G$ with $n$ nodes $\{v_1, v_2, \cdots, v_n\}$, the pairs of inputs can then be $\{(v_1, G), (v_2, G), \cdots, (v_n, G)\}$. Line 52 shows that we aim to leverage a structure-aware neural network to build the relationship between $(v_i, G)$ and $y_i$ for all $i=1,2\cdots,n$.
**Q3:** Why existing TL techniques for pretrained model cannot solve the problem?
**A3:** There are two major limitations of existing pre-trained graph neural networks. First, they do not leverage knowledge from source and target graphs to build a unified transfer learning framework. Intuitively, the crucial idea of our GraphGP algorithm is to find common knowledge (e.g., $S \cap T$) shared by source and target graphs via a unified transfer learning framework. In contrast, pre-trained GNNs might follow a two-stage framework: pre-training on the source graph and then fine-tuning on the target graph. That is, they first leverage the source knowledge (e.g., $S' \subset S$) to build a GNN, and then investigate what knowledge (e.g., $S' \cap T$) in the learned source model is shared with the target graph. Thus, the two-stage transfer learning framework might lose some common knowledge compared to our unified framework, i.e., $S' \cap T \subset S \cap T$. Second, there is little theoretical analysis regarding the knowledge transferability induced by pre-trained graph neural networks. Compared to previous works [29, 49, 71], our paper theoretically shows the connection between graph domain similarity and knowledge transferability across domains (see Corollary 4.6 and Figure 3).
**Q4:** In Eq. 2, what “i” and “j” refers to? What is “layer width”?
**A4:** (a) In Eq. (2), $\mu^{(l)}(v|G)$ indicates the feature vector of node $v$ given graph $G$ at the $l$-th layer. Thus, $j$ in $\mu^{(l)}\_j(v|G)$ indicates the $j$-th dimension of node (sample-level) representation vector $\mu^{(l)}(v|G)$. Similarly, $j$ in $\nu^{(l)}\_j(G)$ indicates the $j$-th dimension of graph (domain-level) representation vector $\nu^{(l)}(G)$. $i$ in $f_i^{(l)}(v, G)$ indicates the $i$-th dimension of output feature vector $f^{(l)}(v, G)$ at the $l$-th layer. Furthermore, $\mathbf{W}_{ij}^{(l)}$ indicates the $i$-th row and $j$-th column of weight matrix $\mathbf{W}^{(l)}$ at the $l$-th layer.
(b) Similar to previous work [41], "layer width" indicates the number of neurons in a graph convolutional layer.
**Q5:** The novelty of the methods, e.g., “Structure-Aware Neural Network” in Section 4?
**A5:** As illustrated in Subsection 4.1, we start by proposing the generic structure-aware neural network (see Eq. (2)). The major novelties of this proposed structure-aware neural network are two-fold. First, it incorporates both node (sample-level) representation and graph (domain-level) representation, in order to model source and target graphs for transferable graph learning. Second, as explained in lines 166-167, it is flexible to instantiate this generic structure-aware neural network with any message-passing GNN.
Specifically, in Eqs. (3)(4), we provide simple instantiations of the structure-aware neural network of Eq. (2). As discussed in lines 176-177, similar message-passing strategies have also been considered in previous works [11, 56, 70]. Instead of developing novel message-passing strategies, this paper focuses on building the connections between the structure-aware neural network and the Gaussian process for transferable graph learning in Subsection 4.2. This connection enables us to develop the GraphGP algorithm in Subsection 4.3 and theoretically understand knowledge transferability across graphs in Subsection 4.4.
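As a rough illustration of the idea discussed in A5 — one message-passing layer that combines a sample-level (node) representation with a domain-level (graph) representation before a shared linear map — here is a hedged numpy sketch. All names, shapes, and the exact update form are illustrative assumptions, not the paper's Eq. (2):

```python
import numpy as np

def structure_aware_layer(A, H, W):
    """One illustrative structure-aware layer.

    A: (n, n) adjacency matrix; H: (n, d) input node features;
    W: (d, d_out) shared weight matrix. All shapes are assumptions.
    """
    # Sample-level representation: mean over each node's neighborhood
    P = A / np.maximum(A.sum(axis=1, keepdims=True), 1)
    mu = P @ H                              # (n, d) node-level signal
    # Domain-level representation: one summary vector for the whole
    # graph, broadcast to every node
    nu = H.mean(axis=0, keepdims=True)      # (1, d) graph-level signal
    return np.tanh((mu + nu) @ W)           # (n, d_out) output features
```

As the rebuttal notes, the generic template can be instantiated with any message-passing GNN; the neighborhood mean and graph mean above are just the simplest choices.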
**Q6:** It is confusing which one is the goal of this paper.
**A6:** We would like to clarify that we provided the generic problem definition of transferable graph learning in Subsection 3.1. The goal is to learn the prediction function on the target graph, using knowledge from the source graph. In Subsection 4.3, we provided more detailed input and output of transferable graph learning on node regression tasks. The goal is still to learn a prediction function (more specifically, node regression function in this case) on the target graph, using knowledge from the source graph. Therefore, the problem definition in Subsection 3.1 is more generic, while in Subsection 4.3, we consider an instantiation of this generic problem setting based on the node regression task. Both of them have the same goal, i.e., learning the prediction function on the target graph using knowledge from the source graph. We will provide more clarification regarding the two problem definitions in the revised version.
**Q7:** The existing graph TL techniques are not compared.
**A7:** We would like to clarify that we did compare our GraphGP algorithm with existing graph transfer learning baselines (e.g., GRADE and AdaGCN) in Table 5.
---
Rebuttal Comment 1.1:
Title: Gentle Reminder
Comment: Dear Reviewer heNY,
We would like to thank you again for your constructive comments and questions on our paper. We have carefully provided our responses to your raised questions and concerns. Please feel free to let us know if you have any other questions or concerns regarding our paper. Thanks for your time and consideration.
Best Regards,
Authors of Submission13750 | Summary: The paper studies transferable graph learning over non-IID graph data. In order to adapt the knowledge from source graphs to target graphs, the paper proposes a graph-structured Gaussian Process (GraphGP). The GraphGP is derived from a structure-aware neural network and due to the flexibility of the hyperparameters, GraphGP is able to transfer knowledge across different types of graphs, such as homophily graphs and heterophily graphs. Experimental results on five datasets show that the proposed GraphGP achieve better performance than Gaussian Process baselines.
Strengths: 1. The motivation of the proposed GraphGP is clear and the idea of using the Gaussian Process to address the graph transfer learning problem is novel and interesting.
2. Theoretical analyses are provided to justify the rationale of the proposed method.
3. Codes are provided for reproducibility.
Weaknesses: 1. It is not very clear from the paper how to implement the kernel function $K^{(L)}$ of GraphGP in the experiments. What kind of kernel function is used in the experiments?
2. The proposed GraphGP can only be used in the regression task. It seems that it is not easy to adapt the model to other graph learning tasks such as the classification task.
3. The paper only uses Gaussian Process models as baselines. Why not compare against some graph neural network models which are considered to be more effective in graph learning tasks?
4. The statistics of experimental datasets are missing. It’s unclear how the proposed GraphGP performs when dealing with graphs of different sizes.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the questions in the Weaknesses section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have discussed the limitations and broader impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your thoughtful reviews and constructive questions about our paper. We appreciate the strengths you highlighted regarding our motivation and theoretical results on transferable graph learning. Here are our answers regarding the concerns.
**Q1:** It is not very clear from the paper how to implement the kernel function $K^{(L)}$ of GraphGP in the experiments. What kind of kernel function is used in the experiments?
**A1:** The calculation of $K^{(L)}$ is shown in Theorem 4.1. It follows an iterative computation process. More specifically, given the base kernel $C^{(0)}$ over node attributes, both the sample-level kernel $K^{(1)}\_{\mu}$ and the domain-level kernel $K^{(1)}_{\nu}$ at the first layer can be computed based on $C^{(0)}$ and the graph structure (i.e., neighbor selection). Thus, the kernel $K^{(1)}$ can be computed. These results can then be leveraged to compute $K^{(2)}$ at the second layer, and the process continues until the final kernel $K^{(L)}$ is computed. In summary, the kernel function is iteratively computed based on the graph structure, and the base kernel $C^{(0)}$ is defined over node attributes. As pointed out in [41], the base kernel $C^{(0)}$ can be any positive-definite kernel, e.g., a linear kernel, RBF kernel, polynomial kernel, etc. Following [41], we adopted the RBF kernel for $C^{(0)}$ in the experiments.
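The iterative layer-by-layer computation described above can be sketched roughly as follows. This is an illustrative simplification only: the neighborhood-averaging operator and the crude domain-level term are stand-ins for the recursion in Theorem 4.1, and variable names (`A`, `X`, `L`, `gamma`) are assumptions:

```python
import numpy as np

def rbf_base_kernel(X, gamma=1.0):
    # C^(0): RBF base kernel over node attributes
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def iterative_graph_kernel(A, X, L=2, gamma=1.0):
    # Row-normalized neighborhood operator (a stand-in for the paper's
    # neighbor-selection step)
    P = A / np.maximum(A.sum(axis=1, keepdims=True), 1)
    K = rbf_base_kernel(X, gamma)              # layer-0 kernel C^(0)
    for _ in range(L):
        K_mu = P @ K @ P.T                     # sample-level propagation
        K_nu = K.mean() * np.ones_like(K)      # crude domain-level term
        K = K_mu + K_nu                        # combined kernel, next layer
    return K
```

The key point the answer makes survives the simplification: each layer's kernel is a deterministic function of the previous layer's kernel and the graph structure, so only the base kernel over node attributes needs to be chosen (RBF in the experiments).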
**Q2:** The proposed GraphGP can only be used in the regression task. It seems that it is not easy to adapt the model to other graph learning tasks such as the classification task.
**A2:** We would like to point out that the proposed GraphGP can be adapted to handle classification tasks. There are two feasible solutions. (1) Following previous work [41], we can use the one-hot representation of class labels to set up a multi-output regression problem. In this case, each dimension (e.g., the $i$-th dimension of $\mathbf{y} \in R^{C}$) of output values of an input sample corresponds to whether this sample belongs to the $i$-th class (e.g., $\mathbf{y}_i \in \{0, 1\}$). (2) Another solution is to leverage the approximation techniques [r1, r2] to handle the non-Gaussian likelihood in the classification problem setting. For demonstration purposes, we conduct additional experiments to investigate the first solution by adapting GraphGP to the classification task. Table C reports the results (measured by node classification accuracy on the target graph) of GraphGP on node classification tasks. Here we use the social networks from [r3], where Blog 1 and Blog 2 are two disjoint social networks extracted from BlogCatalog. We would like to leave the graph-aware approximation solution for classification as our future work as it is beyond the scope of the current paper.
| Model | Blog 1 $\to$ Blog 2 | Blog 2 $\to$ Blog 1 |
|---------|-----------------------|-----------------------|
| RBFGP | 0.2113$_{\pm 0.0087}$ | 0.2580$_{\pm 0.0083}$ |
| GINGP | 0.5190$_{\pm 0.0225}$ | 0.5097$_{\pm 0.0230}$ |
| GraphGP | 0.5505$_{\pm 0.0052}$ | 0.5244$_{\pm 0.0162}$ |
Table C: Performance comparison on cross-network node classification tasks
[r1] Hensman, James, Alexander Matthews, and Zoubin Ghahramani. "Scalable variational Gaussian process classification." In Artificial Intelligence and Statistics, pp. 351-360. PMLR, 2015.
[r2] Williams, Christopher KI, and Carl Edward Rasmussen. Gaussian processes for machine learning. Vol. 2, no. 3. Cambridge, MA: MIT press, 2006.
[r3] Shen, Xiao, Quanyu Dai, Fu-lai Chung, Wei Lu, and Kup-Sze Choi. "Adversarial deep network embedding for cross-network node classification." In Proceedings of the AAAI conference on artificial intelligence, vol. 34, no. 03, pp. 2991-2999. 2020.
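Solution (1) above — reducing classification to multi-output GP regression over one-hot labels — can be sketched in a few lines of numpy with a plain RBF kernel. This is purely illustrative; in GraphGP the learned graph kernel $K^{(L)}$ would take the place of the RBF kernel used here:

```python
import numpy as np

def rbf(X1, X2, gamma=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def gp_one_hot_classify(X_train, y_train, X_test, n_classes, noise=1e-6):
    Y = np.eye(n_classes)[y_train]          # one-hot targets, shape (n, C)
    K = rbf(X_train, X_train)
    K_star = rbf(X_test, X_train)
    # GP posterior mean per output dimension: K_* (K + sigma^2 I)^{-1} Y
    mean = K_star @ np.linalg.solve(K + noise * np.eye(len(X_train)), Y)
    return mean.argmax(axis=1)              # predicted class per test point
```

Each output dimension is treated as an independent regression target, and the argmax over the predicted one-hot vector recovers the class label, exactly as in the reduction from [41].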
**Q3:** The paper only uses Gaussian Process models as baselines. Why not compare against some graph neural network models which are considered to be more effective in graph learning tasks?
**A3:** We would like to clarify that we did compare the proposed GraphGP method with recent transferable graph neural networks in Table 5. More specifically, we consider two recent transferable graph neural network models: GRADE [58] and AdaGCN [14]. Both of them design the transferable graph learning algorithm based on existing graph neural network architectures. The experimental results validated the effectiveness of our approach over these baselines.
**Q4:** The statistics of experimental datasets are missing. It’s unclear how the proposed GraphGP performs when dealing with graphs of different sizes.
**A4:** The data statistics are summarized as follows.
| Data | | \# nodes | \# edges |
|-------------|-----------|----------|----------|
| Twitch | DE | 9,498 | 315,774 |
| | EN | 7,126 | 77,774 |
| | ES | 4,648 | 123,412 |
| | FR | 6,551 | 231,883 |
| | PT | 1,912 | 64,510 |
| | RU | 4,385 | 78,993 |
| Agriculture | Maize | 182 | 364 |
| | Sorghum | 1,610 | 3,220 |
| | Soybean | 389 | 778 |
| Airports | USA | 1,190 | 13,599 |
| | Brazil | 131 | 1,038 |
| | Europe | 399 | 5,995 |
| Wikipedia | Chameleon | 2,277 | 31,421 |
| | Crocodile | 11,631 | 170,918 |
| | Squirrel | 5,201 | 198,493 |
| WebKB | Cornell | 183 | 298 |
| | Texas | 183 | 325 |
| | Wisconsin | 251 | 515 |
Table D: Data statistics
---
Rebuttal Comment 1.1:
Title: Gentle Reminder
Comment: Dear Reviewer pydS,
We would like to thank you again for your thoughtful reviews and constructive questions about our paper. We have carefully provided our answers to your raised questions and concerns. Please feel free to let us know if you have any other questions or concerns regarding our paper. Thanks for your time and consideration.
Best Regards,
Authors of Submission13750 | Summary: This paper studies the problem of transferable graph learning involving knowledge transfer from a source graph to a relevant target graph. To solve this problem, the authors propose a graph Gaussian process (GraphGP) algorithm, which is derived from a structure-aware neural network encoding both sample-level node representation and domain-level graph representation. The efficacy of GraphGP is verified theoretically and empirically in various transferable node regression tasks.
Strengths: 1. This paper is well-organized and the presentation is good.
2. The authors propose a generic graph-structured Gaussian process framework, which encodes local node representation (sample-level) and global graph representation (domain-level) simultaneously.
3. The proposed framework tackles the knowledge transferability in homophily and heterophily graphs using a simple neighborhood selection strategy.
Weaknesses: 1. Related work is inadequate. Graph transfer learning has also been studied in some important literature [1,2,3], but they are not discussed in this paper and should be adopted as baselines to compare with the proposed method.
2. The experimental part lacks the study of each component involved in the method, and the contribution of the proposed component to this paper cannot be proved, such as local node representation (sample-level), global graph representation (domain-level) and the neighborhood selection strategy.
3. The caption of Table 5 is too brief. It should contain more information to explain the table content.
[1] Zhu, Qi, et al. "Transfer learning of graph neural networks with ego-graph information maximization." Advances in Neural Information Processing Systems 34 (2021): 1766-1779.
[2] Han, Xueting, et al. "Adaptive transfer learning on graph neural networks." Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining. 2021.
[3] Wu, Jun, Jingrui He, and Elizabeth Ainsworth. "Non-IID Transfer Learning on Graphs." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 9. 2023.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: In addition to covariate shift, label shift is also commonly considered in transfer learning scenarios. It is much more challenging to extend the developed transferable graph Gaussian processes to tackle label shift scenarios.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the constructive comments and suggestions. In the following, we present our responses addressing the raised concerns.
**Q1:** Related work is inadequate. Graph transfer learning has also been studied in some important literature [1,2,3], but they are not discussed in this paper and should be adopted as baselines to compare with the proposed method.
**A1:** We would like to clarify that in the paper, we did discuss the related work [1,3] (references [58, 71] in the original paper) and take [3] (reference [58] in the original paper) as one of our baselines in Table 5. In addition, we conduct additional experiments on Airport data sets. Table A further verifies the effectiveness of GraphGP compared to GRADE [3] and EGI [1]. Previous work [2] studied a different transferable graph learning problem. The major goal of [2] is to automatically select the most useful self-supervised tasks in the source graph to help the target task. Thus, it requires multiple unsupervised tasks within the source graph. In contrast, our work focused on modeling the transferability from the source graph with a single supervised task to the target graph. That is, [2] aimed to answer which source task should be leveraged to help the target task, whereas our work focused on answering how knowledge can be transferred across graphs with a single task.
| Model | BR $\to$ EU | EU $\to$ BR | BR $\to$ US |
|---------|-----------------------|-----------------------|-----------------------|
| EGI [1] | 0.5204$_{\pm 0.0357}$ | 0.4786$_{\pm 0.0225}$ | 0.4951$_{\pm 0.0176}$ |
| GRADE [3] | 0.5314$_{\pm 0.0208}$ | 0.4792$_{\pm 0.0296}$ | 0.4354$_{\pm 0.0109}$ |
| GraphGP | 0.5567$_{\pm 0.0246}$ | 0.4983$_{\pm 0.0370}$ | 0.5293$_{\pm 0.0335}$ |
Table A: Performance comparison between GraphGP and [1][3]
**Q2:** The experimental part lacks the study of each component involved in the method, and the contribution of the proposed component to this paper cannot be proved, such as local node representation (sample-level), global graph representation (domain-level) and the neighborhood selection strategy.
**A2:** In Table 7 of Appendix A.7.2, we reported the learned weight $\alpha_i$ of GraphGP in the target domain for different data sets. These results validated the effectiveness of our neighborhood selection strategy in capturing homophily information from local neighborhoods. Besides, we conduct additional ablation studies to validate the effectiveness of local node representation (sample-level), global graph representation (domain-level), and the neighborhood selection strategy. Table B shows the results on Airport and WebKB. Here we consider three variants of GraphGP. (1) GraphGP-Local: It is a variant of GraphGP with only local node representation (i.e., the global graph induced kernel $K_{\nu}$ is removed). (2) GraphGP-Global: Instead of using node representation in GraphGP, GraphGP-Global directly uses raw node attributes/features as the local representation, while the global representation is learned as in GraphGP. (3) GraphGP with $\alpha_i \equiv 1$: That is, GraphGP equally aggregates information from local neighborhoods. Table B shows that without local node representation or global distribution information, the performance of GraphGP drops. Furthermore, without adaptively selecting neighbors, the potential heterophilic neighbors would significantly degrade the performance of GraphGP on WebKB.
| Model | Airport | | WebKB | |
|----------------------------------|-------------------------|-------------------------|-------------------------|-------------------------|
| | BR $\to$ EU | BR $\to$ US | CO $\to$ TX | WS $\to$ TX |
| GraphGP-Local | 0.5475$_{\pm {0.0249}}$ | 0.4986$_{\pm {0.0273}}$ | 0.3998$_{\pm {0.0405}}$ | 0.3243$_{\pm {0.0506}}$ |
| GraphGP-Global | 0.5229$_{\pm {0.0172}}$ | 0.4158$_{\pm {0.0220}}$ | 0.3344$_{\pm {0.0717}}$ | 0.3017$_{\pm {0.0314}}$ |
| GraphGP with $\alpha_i \equiv 1$ | 0.5414$_{\pm {0.0127}}$ | 0.5118$_{\pm {0.0188}}$ | 0.1190$_{\pm {0.0298}}$ | 0.0632$_{\pm {0.0381}}$ |
| GraphGP | 0.5567$_{\pm 0.0246}$ | 0.5293$_{\pm 0.0335}$ | 0.4146$_{\pm 0.0402}$ | 0.3301$_{\pm 0.0585}$ |
Table B: Ablation studies on local node representation (sample-level), global graph representation (domain-level), and the neighborhood selection strategy
**Q3:** The caption of Table 5 is too brief. It should contain more information to explain the table content.
**A3:** As illustrated in lines 368-369, Table 5 reports the results of the proposed GraphGP algorithm and transferable graph neural network (GNN) baselines on RU $\to$ PT of the Twitch data set. We will make the caption of Table 5 much clearer in the revised version.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' efforts for their response. My concerns have been well addressed and I would like to raise my score to 6.
---
Reply to Comment 1.1.1:
Title: Response by authors
Comment: Dear Reviewer V7Dg,
Thank you very much for acknowledging that your concerns have been well addressed.
Best Regards,
Authors of Submission13750 | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Unleash the Potential of Image Branch for Cross-modal 3D Object Detection | Accept (poster) | Summary: This paper proposes a novel cross-modal 3D detector, BiProDet, which leverages information from the image domain in two ways. First, it proposes a point-to-pixel bidirectional propagation strategy to boost the representation ability of the point cloud backbone. Second, it introduces NLC map estimation as an auxiliary training task so that the image branch learns local spatial-aware features that complement sparse point clouds. BiProDet exhibits consistent and more significant improvements on the "moderate" and "hard" levels, where objects are distant or highly occluded with sparse points, and is ranked 1st on the KITTI 3D detection benchmark for the cyclist class.
In my opinion, this paper discusses the reinforcement mechanism of the image branch for 3D detectors in depth, and the bidirectional propagation designed to enhance the performance of the backbone is very innovative. A wealth of ablation experiments has been done to demonstrate the effectiveness of the proposed algorithm.
Strengths: The paper has several strengths in terms of originality, quality, clarity, and significance.
1. It introduces a 2D auxiliary task called normalized local coordinate (NLC) map estimation to improve the performance of the cross-modal detector by providing the relative position of each pixel inside the object.
2. The paper proposes a novel point-to-pixel feature propagation mechanism that allows 3D geometric features from LiDAR point clouds to enhance the representation ability of 2D image learning.
3. It provides an analysis of the performance bottlenecks of single-modal detectors.
Weaknesses: 1. The motivation is weak. The proposed point-to-pixel method is just a fusion strategy in cross-modal modeling, which has also been explored in many works such as BEVFusion [1], so it is relatively weak as the motivation of the paper.
2. The proposed specific 2D auxiliary tasks have been explored in TiG-BEV [2]. Although the format of the 2D tasks is not the same, the core of the methods is similar.
[1] https://github.com/mit-han-lab/bevfusion
[2] TiG-BEV: Multi-view BEV 3D Object Detection via Target Inner-Geometry Learning
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: None
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Response to Reviewer DmuL
We sincerely appreciate the reviewer for your time and effort in reviewing our paper. In the following, we will comprehensively address your concerns.
### **Comment 1:** *The motivation is weak. As the proposed method point-to-pixel is just a fusion strategy in cross-modal modeling, which also been explored in many works such as BEVFusion[1]. So it is relatively weak to serve as the motivation of the paper.*
**Response:** We agree that bidirectional fusion mechanisms have been explored before. However, the motivation of our work is **not** to obtain a stronger fused feature representation with trivial structural innovation of the point-to-pixel module like previous works. Instead, recent research [R1] shows that the representation capability of point-based backbone networks for 3D point clouds is still relatively **insufficient** due to their irregular characteristics, which limits the performance of 3D detectors. Therefore, we aim to directly **boost the representation capability of the point cloud backbone** network by back-propagating **gradients** from the training objectives of the image branch through the proposed point-to-pixel module. We experimentally demonstrated our claim in Table 2 of our manuscript, where the mAP was largely boosted from 75.88% (Table 2(a)) to 77.01% (Table 2(b)), **precisely** indicating that the point-to-pixel propagation indeed strengthens the representation ability of the 3D LiDAR branch. To the best of our knowledge, it has never been identified previously that **2D** auxiliary tasks can be used to improve the representation ability of the **3D** backbone. Besides, **Reviewer gAXu** also acknowledged that such a manner is non-trivial and interesting.
[R1] Zhang et al., PointMCD: Boosting Deep Point Cloud Encoders via Multi-view Cross-modal Distillation for 3D Shape Recognition, TMM'23.
### **Comment 2:** *The proposed specific 2D auxiliary tasks have been explored in TiG-BEV [2]. Although the format of the 2D tasks is not the same, the core of the methods is similar.*
**Response:** We agree with the reviewer that the adopted 2D supervision tasks are similar. The camera-based TiG-BEV utilizes inner-depth supervision to enhance the understanding of object-wise spatial structures. By contrast, our cross-modal method incorporates the image branch to learn local spatial-aware features, complementing the information the sparse point clouds provide. It is worth noting that we have made our article and source code publicly available **prior** to the release of TiG-BEV.
It would be better to **holistically** understand the two key components, i.e., the NLC map estimation and point-to-pixel module, rather than in isolation. On one hand, we can learn extra information from the image branch with auxiliary tasks. On the other hand, we demonstrate that the gradients back-propagated from the image branch through the point-to-pixel module can boost the representation ability of the point cloud backbone. The suitable 2D tasks and the point-to-pixel module are **intricately linked**, and it is only by combining them that the greatest performance improvement can be achieved.
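For context on what a point-to-pixel step can look like geometrically, here is a generic hedged sketch: LiDAR point features are projected through a camera matrix and scatter-added onto the image feature grid, where they can be fused with image features (and, during training, pass gradients back to the 3D branch). This is not the authors' module; the projection matrix `P` and all shapes are assumptions, and the learned fusion layers are omitted:

```python
import numpy as np

def point_to_pixel(points, point_feats, P, H, W):
    # points: (N, 3) LiDAR coordinates; point_feats: (N, C) features;
    # P: (3, 4) camera projection matrix; (H, W) image feature grid size
    homo = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    uvw = homo @ P.T                                       # (N, 3) image-plane coords
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    grid = np.zeros((H, W, point_feats.shape[1]))
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (uvw[:, 2] > 0)
    # Unbuffered scatter-add: points landing on the same pixel accumulate
    np.add.at(grid, (v[valid], u[valid]), point_feats[valid])
    return grid
```

Because the scatter is differentiable with respect to the point features (in an autodiff framework), training losses defined on the image grid can flow back into the 3D branch, which is the mechanism the rebuttal emphasizes.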
---
Rebuttal 2:
Title: Rebuttal?
Comment: Dear reviewer,
Can I ask you to please see if the rebuttal addresses your concerns?
---
Rebuttal Comment 2.1:
Comment: The rebuttal solves my concern on the novelty part. For the comment 1, using gradient from image branch to help the lidar branch is interesting. For the comment 2, to support your claim that "The suitable 2D tasks and the point-to-pixel module are intricately linked", is there any ablation on considering "the NLC map estimation and point-to-pixel module" in isolation or as a whole?
---
Reply to Comment 2.1.1:
Title: Thanks for your valuable feedback and your recognition!
Comment: Thank you for providing your valuable feedback. We are delighted to learn that our rebuttal effectively addresses your concerns regarding the novelty issue. Your recognition of our interesting idea is greatly appreciated.
In the manuscript, we have presented the results of the corresponding ablation studies on the relationship between NLC map estimation and the point-to-pixel module. These results can be found in **Table 2** and **Table 3**. To facilitate your understanding, we will now summarize the findings to support our claim about the intricate relationship between NLC map estimation and the point-to-pixel module.
As shown in the table below, introducing either the point-to-pixel module or the NLC map supervision alone improves mAP by 0.43% and 0.72%, respectively. Combining both elements yields the most significant improvement, an increase of 1.89% in mAP.
| Exp. | Point2Pixel | 2D NLC | Veh. | Ped. | Cyc. | Mean |
| :--: | :---------: | :---------: | :----: | :----: | :----: | :----: |
| (a) | ✖ | ✖ | 86.99 | 63.78 | 79.49 | 76.75 |
| (b) | ✔ | ✖ | 87.10 | 64.52 | 79.94 | 77.18 |
| (c) | ✖ | ✔ | 86.94 | 64.85 | 80.62 | 77.47 |
| (d) | ✔ | ✔ | 87.18 | 67.52 | 81.21 | 78.64 |
We hope that this summary provides further clarity on the relationship between NLC map estimation and the point-to-pixel module. In the final version, we will highlight this point. Once again, we sincerely appreciate your valuable feedback and support. | Summary: This paper studies 3D object detection with multi-modal inputs (image and point cloud). The study uses a two-stage 3D object detection pipeline and proposes two approaches for further performance improvements. Firstly, the authors propose a bidirectional feature propagation module to fuse point cloud and image features. Secondly, the authors propose several 2D and 3D auxiliary tasks to improve representation learning. Experiments validate the proposed approaches on KITTI and the Waymo Open Dataset.
Strengths: 1. the ablation study is well-designed and extensive to validate the effectiveness of the proposed NLC map representation and fusion network.
2. the idea of using NLC map is well-motivated.
3. the performance on KITTI and Waymo Open Dataset is competitive in several metrics.
Weaknesses: 1. the technical novelty is somewhat limited. Bidirectional feature fusion is not new, for example, see "FFB6D: A Full Flow Bidirectional Fusion Network for 6D Pose Estimation". The proposed fusion approach is similar to this paper, which is not cited. In addition, using auxiliary loss to improve representation learning is also not very novel.
2. Auxiliary tasks require extra labels. Do other approaches in Table 1 use semantic masks as supervision?
3. Compared with [45], the performance in the EASY category is inferior.
4. Previous fusion-based approaches such as [1] provided evaluations on the NuScenes dataset, yet this paper does not.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Why is the performance on the EASY category of KITTI inferior compared with [45]? Can the authors further improve it?
2. Can the authors provide quantitative comparisons with [1] on the NuScenes dataset?
3. Have the authors attempted other fusion approaches instead of the proposed one? What would the performance difference be?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have not addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Response to Reviewer s1rE
We sincerely appreciate you raising insightful points that helped improve our work. The comments have helped us better articulate the key contributions and value of our work.
### **Comment 1:** *The technical novelty is somewhat limited. Bidirectional feature fusion is not new.*
**Response:** We agree that bidirectional fusion techniques have been explored in prior works. However, our main contribution is **not** a structurally new fusion module, e.g., a bidirectional fusion module improved with an attention mechanism. Instead, we aim to **directly boost the representation capability of the point cloud backbone** network by back-propagating **gradients** from the training objectives of the image branch through the proposed point-to-pixel module, motivated by the relatively weak representation capability of current point-based backbones for processing irregular 3D point clouds (see Lines 39-46 of the manuscript). We experimentally demonstrated this claim in Table 2 of our manuscript, where the mAP is largely boosted from 75.88% (Table 2(a)) to 77.01% (Table 2(b)), **precisely** indicating that point-to-pixel propagation indeed strengthens the representation ability of the 3D LiDAR branch. To the best of our knowledge, it has never been identified previously that 2D auxiliary tasks can improve the representation ability of a 3D backbone. Besides, **Reviewer gAXu** also acknowledged that this mechanism is non-trivial and interesting.
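To illustrate the mechanism informally (a toy numpy sketch with assumed names and shapes, not the actual module): point features are scattered onto the pixels the points project to, and because scattering is a pure re-indexing, the gradient of any 2D loss at those pixels flows straight back to the corresponding point features, i.e., into the 3D backbone that produced them.

```python
import numpy as np

def project_to_pixels(points, K):
    """Pinhole projection: (N, 3) camera-frame points -> integer pixel coords (u, v)."""
    uvw = points @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]
    return np.round(uv).astype(int)

def point_to_pixel(point_feats, pixels, hw):
    """Scatter per-point features onto an (H, W, C) image-plane feature grid."""
    h, w = hw
    grid = np.zeros((h, w, point_feats.shape[1]))
    grid[pixels[:, 1], pixels[:, 0]] = point_feats  # grid is indexed [v, u]
    return grid

def backprop_to_points(grad_grid, pixels):
    """Since scattering is index assignment, dL/d(point_feats) is the 2D
    gradient gathered back at the same pixel locations."""
    return grad_grid[pixels[:, 1], pixels[:, 0]]
```

A 2D auxiliary objective computed on `grid` therefore trains whatever network produced `point_feats`, which is the effect the response above attributes to the point-to-pixel module.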
### **Comment 2:** *Using auxiliary loss to improve representation learning is not very novel.*
**Response:** Previous works have used 3D and 2D auxiliary losses to improve the representation ability of 3D and 2D backbones, respectively. Our key finding is that the gradients back-propagated from the training objectives of the image branch can boost the representation ability of the point cloud backbone (see Table 2 of our manuscript). To the best of our knowledge, it has **never** been identified previously that **2D** auxiliary tasks could so effectively improve the representation power of a **3D** backbone.
### **Comment 3:** *Auxiliary tasks require extra labels. Do other approaches in Table 1 use semantic masks as supervision?*
**Response:** Thanks for raising a fair point. Among the several methods listed in Table 1, MMF used extra depth completion supervision and PointPainting used extra semantic segmentation labels. Besides, it is **common** to use extra image segmentation labels in cross-modal 3D detectors, such as EPNet and MVX-Net.
### **Comment 4:** *Compared with [45], the performance in the EASY category is inferior. Can the authors further improve it?*
**Response:** Thank you for the detailed review. Our method particularly benefits challenging cases with **sparser** point clouds, which are more likely to occur for distant or occluded objects in the moderate and hard categories. Specifically, in this work, we learn local spatial-aware features in the image branch under 2D NLC map supervision, which serves as a complement to the sparse point clouds. While [45] presents a novel embedding-querying paradigm and achieves higher APs in the easy category, our overall performance **significantly** surpasses it. Following your suggestion, we will explore ways to improve the AP for easy cases by incorporating stronger image backbones and leveraging recent advances in the field of 3D object detection.
### **Comment 5:** *Previous fusion-based approaches such as [1] provided evaluations on the NuScenes dataset, yet this paper does not.*
**Response:** We appreciate the suggestion to include experiments on the NuScenes dataset. To be honest, it is very difficult to achieve competitive performance on the large-scale NuScenes dataset in such a short time with limited GPU resources. In particular, the baseline **point-based** detector we used performs **poorly** on NuScenes or Waymo, which increases the experiment workload. In fact, when preparing this work, we made great efforts to improve the performance of point-based detectors on the Waymo dataset to be **comparable** to the BEV-based detectors. Despite the missing experiments on NuScenes, we posit our method sheds light on point-based methods by offering new insights and an effective framework. Also, we believe the strong results on KITTI and Waymo sufficiently demonstrate the effectiveness of our proposed method and validate our motivation.
### **Comment 6:** *Have the authors attempted other fusion approaches instead of the proposed one? What will be the performance difference?*
**Response:** We have not compared the performance with other fusion approaches, as our main contribution is **not** a structurally new fusion method, e.g., a bidirectional fusion module improved with an attention mechanism. Instead, we explore point-to-pixel propagation with the goal of **directly enhancing the representation capability of the 3D LiDAR backbone** network by back-propagating gradients from the training objectives of the image branch. The minimalist point-to-pixel design best reveals this effect, which, to the best of our knowledge, has never been identified previously. The core ideas could be integrated with other fusion approaches in the future, and we believe our work provides valuable insights despite the absence of such comparisons.
---
Rebuttal 2:
Title: Rebuttal?
Comment: Dear reviewer,
Can I ask you to please look at the rebuttal and see if it addresses your concerns?
---
Rebuttal Comment 2.1:
Comment: The rebuttal resolves my concerns about the novelty of the proposed approach. However, the authors could not provide quantitative results to convince me about the performance part (e.g., weaker performance on the EASY category, fewer comparisons on other datasets, and with other fusion approaches ).
---
Reply to Comment 2.1.1:
Title: Thanks for your valuable feedback
Comment: Dear Reviewer s1rE,
We deeply appreciate your valuable feedback and thoughtful examination of our work. We are delighted to learn that our rebuttal effectively addresses your concerns regarding the novelty issue. In response to your concerns about the quantitative results:
**Performance on the EASY category:** While we acknowledge that our method may not achieve peak performance in every category, it is essential to recognize that the overall efficacy of a method is not solely dictated by its optimization in all metrics. We believe that the comprehensive performance of our approach, particularly in challenging scenarios, is indicative of its robustness and versatility.
**Comparisons on other datasets:** With regard to the NuScenes dataset, we concur that there is room for additional benchmarking. However, the computational and resource constraints we faced during our experiments limited our explorations. It is noteworthy that our approach has shown promising results on benchmarks like KITTI and Waymo, which are well-regarded in the community. We believe that these results validate the effectiveness of our proposed method.
**Comparisons with other fusion approaches:** As our main contribution is **not** proposing a **structurally** new fusion method, we have not tried to compare the performance with other fusion approaches. Our primary objective of the point-to-pixel module is to enhance the **representation capability** of the 3D LiDAR backbone network by leveraging gradients from the training objectives of the image branch. We experimentally demonstrated our claim in Table 2 of our manuscript. We posit the findings bring insights into the cross-modal knowledge distillation for 3D detection. Thus, the absence of a direct comparison with other fusion approaches **does not diminish** our contribution.
We genuinely hope that these clarifications provide a clearer perspective on our research and its merits. Thanks again for your valuable time and feedback.
Warm regards,
Authors | Summary: This paper proposes a multi-modal fusion-based 3D object detector named BiProDet. BiProDet adopts a bidirectional feature propagation mechanism, i.e., a point-to-pixel module and a pixel-to-point module. Besides, BiProDet proposes a new auxiliary task called Normalized Local Coordinate (NLC) map estimation.
Strengths: The paper is well written, and the ablation study demonstrates the effectiveness of the proposed method.
Weaknesses: 1. Comparison with some closely related work is missing. For example, the proposed method is similar to EPNet++ ( [21] in the submitted manuscript) in several aspects. Both of them design bidirectional feature fusion and adopt semantic segmentation as an auxiliary task.
2. BEV representation is receiving increasing attention recently. However, a comparison with these methods is missing.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The proposed method is closely related to EPNet++. It would be better to provide a more detailed comparison between these two approaches.
2. BEV representation is receiving increasing attention recently and achieves SOTA results on many benchmarks. It would be better to compare the proposed method with BEV-based fusion methods (e.g., BEVFusion).
3. It would be more convincing to evaluate the proposed method on more datasets besides KITTI, e.g., NuScenes.
4. Why is NLC more beneficial for small objects such as ped. and cyc.?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The proposed fusion method is not as flexible as the BEV representation. And the comparison with BEV-based methods is missing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Response to Reviewer 5jwD
We sincerely appreciate the time and effort you have dedicated to providing such insightful and comprehensive feedback on our work. The comments you have raised help us identify areas for improvement in our work. The review process has been informative in refining our paper.
### **Comment 1:** *Comparison with some closely related work is missing. It would be better to provide a more detailed comparison between the proposed method and EPNet++.*
**Response:** Thanks for pointing out this related paper. It is worth mentioning that the authors were **indeed** unaware of the preprint version of EPNet++ on arXiv at the early stage of this project. Here we summarize the differences between our work and EPNet++:
- EPNet++ proposes an LI-Fusion layer to enable more interaction between the two modalities and finally obtains more comprehensive features. By contrast, we explore point-to-pixel propagation from the perspective of **directly improving the representation capability of the 3D LiDAR backbone** network. Specifically, by designing feasible 2D auxiliary tasks, the **gradients** back-propagated from the training objectives of the image branch can boost the representation ability of the point cloud backbone. We experimentally demonstrated this claim in Table 2 of our manuscript. To the best of our knowledge, it has never been identified previously that **2D** auxiliary tasks can be used to improve a **3D** backbone.
- We also verify that the bidirectional propagation benefits **not only** the 3D object detection task **but also** the 2D semantic segmentation task (see Table 7 of the supplementary material). The decent results show the potential of 2D-3D joint learning between 3D object detection and more 2D scene understanding tasks.
- We design a **concise yet effective** bidirectional propagation strategy without bells and whistles, which achieves significantly better performance than EPNet++ [4], according to its reported results on the KITTI test set. Moreover, we made our source code publicly available before EPNet++.
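For the opposite, pixel-to-point direction of such bidirectional propagation, a rough sketch (ours, with assumed shapes; the actual module may differ) is that each point gathers an image feature by bilinear interpolation at its projected sub-pixel location:

```python
import numpy as np

def pixel_to_point(img_feats, uv):
    """Bilinearly sample an (H, W, C) feature map at float pixel locations uv (N, 2)."""
    h, w, _ = img_feats.shape
    u = np.clip(uv[:, 0], 0.0, w - 1.001)   # keep u0 + 1 in bounds
    v = np.clip(uv[:, 1], 0.0, h - 1.001)
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    du, dv = (u - u0)[:, None], (v - v0)[:, None]
    top = img_feats[v0, u0] * (1 - du) + img_feats[v0, u0 + 1] * du
    bot = img_feats[v0 + 1, u0] * (1 - du) + img_feats[v0 + 1, u0 + 1] * du
    return top * (1 - dv) + bot * dv  # (N, C) per-point image features
```

The gathered features can then be concatenated with the point features, which is the "feature-level fusion" direction contrasted with the gradient-only point-to-pixel path in the responses above.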
### **Comment 2:** *It would be better to compare the proposed method with BEV-based fusion methods. It would be more convincing to evaluate the proposed method on more datasets besides KITTI, e.g., NuScenes.*
**Response**: We appreciate the suggestion to include experiments on the NuScenes dataset. To be honest, it is very **difficult** to achieve competitive performance on the large-scale NuScenes dataset in such a short time with limited GPU resources. In particular, the baseline **point-based** detector we used performs **poorly** on NuScenes or Waymo, which increases the experiment workload. In fact, when preparing this work, we made great efforts to improve the performance of point-based detectors on the Waymo dataset to be **comparable** to the BEV-based detectors. Despite the missing experiments on NuScenes, we posit our method sheds light on point-based methods by offering new insights and an effective framework. Also, we believe the strong results on KITTI and Waymo sufficiently demonstrate the effectiveness of our proposed method and validate our motivation.
### **Comment 3:** *Why NLC is more beneficial for small objects such as ped. and cyc.?*
**Response:** Insightful observation! Small objects like pedestrians tend to have **sparser** point clouds and be more sensitive to occlusion. While their LiDAR representations may be incomplete, the contour and appearance cues could still be clear in RGB images. By learning local spatial-aware features from images supervised by 2D NLC maps, we can better complement the sparse LiDAR observations for small objects. Thus, the local spatial information from images provides greater benefits for small objects with sparser point clouds.
### **Comment 4:** *The proposed fusion method is not as flexible as the BEV representation.*
**Response:** While BEV methods show flexibility on large datasets, point-based approaches also have advantages, like **preserving** fine details without voxelization. In this work, we made great efforts to improve the performance of point-based detectors on the Waymo dataset to be comparable to the BEV-based detectors. For example, we found that it is crucial to develop a separate head for each category like CenterPoint, aiming to learn the biases of different categories and solve the category imbalance problem. We believe our method **sheds light on** point-based methods by offering new insights and an effective framework.
---
Rebuttal 2:
Title: Gentle Reminder
Comment: Dear Reviewer 5jwD,
Thank you for taking the time to review our submission and the favorable recommendation. As the discussion phase between the reviewers and authors is coming to an end, we would be grateful if you could acknowledge receipt of our responses and let us know if they address your concerns. We remain open and enthusiastic about any further discussions or clarifications you might deem necessary.
Warm regards,
Authors | Summary: This work addresses the task of 3D object detection from LiDAR and cameras. Their main contribution is developing a joint 2D and 3D stream architecture, with simple bidirectional feature flow in the backbones. To improve this, they propose to predict NLC maps in the image stream. The proposed components demonstrate performance improvement on the KITTI dataset.
Strengths: - The paper is clear and easy to read, with diagrams to explain crucial parts.
- The NLC prediction significantly improves performance, supporting the core hypothesis in the paper.
- As opposed to some existing works with joint 2D-3D backbones, the proposed bidirectional feature flow is simple without bells and whistles and complicated components.
- It is a non-trivial and interesting result that point-to-pixel flow during training only can still improve 3D detection performance, showing that 2D gradients can help 3D model learning.
Weaknesses: - The statement “However, there is no evidence that these methods actually enhance the representation capability of the 3D LiDAR backbone network, which is also one of the most critical influencing factors.” in L41 - L46 (as well as its encompassing paragraph) is unclear to me and seems unsubstantiated. Many existing works (PointPainting, AutoAlign, etc) have enhanced the point cloud/3D voxels with 2D semantics before/while using the 3D backbone network and have demonstrated significant improvements in detection metrics by training end-to-end.
- I am slightly concerned that all of the ablative results shown are on the KITTI dataset, which is a small-scale dataset with low diversity. I would appreciate some ablations on at least the key components (NLC prediction, point-to-pixel flow) on the Waymo dataset. Nonetheless, it is impressive that a point-based method can achieve such strong performance on Waymo.
- Point-to-pixel flow during training only does improve 3D detection performance, which seems to demonstrate that training a 2D CNN can help the 3D network learn better features. However, this is slightly confounded with the possibility that *NLC* supervision is what is improving 3D detection when point-to-pixel flow is added, not the 2D CNN. It would strengthen the paper if point-to-pixel flow improves performance even when 3D NLC supervision (similar to that in PartA2) is done.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - Please reference the weaknesses section for questions.
- I also want to ask if any augmentations are done in 2D. For instance, LR flipping may introduce ambiguity in predicting left vs right side of an object.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Authors do not appear to have included a limitations section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Response to Reviewer gAXu
We sincerely appreciate the time and effort in evaluating our manuscript. Your meticulous review and thoughtful critiques truly reflect your deep domain expertise and diligence as a reviewer. In what follows, we will address your remaining concerns comprehensively and clearly.
### **Comment 1:** *The statement "there is no evidence that these methods actually enhance the representation capability of the 3D LiDAR backbone network" is not clear and seems unsubstantiated. Many existing works (PointPainting, AutoAlign, etc.) have enhanced the point cloud/3D voxels with 2D semantics before/while using the 3D backbone network and have demonstrated significant improvements.*
**Response:** Sorry for the confusion caused. Prior works have indeed shown performance gains from **feature-level** fusion. By contrast, our key contribution is demonstrating that, with feasible 2D auxiliary tasks, the **gradients** back-propagated from the training objectives of the image branch can boost the **representation ability** of the point cloud backbone. As analyzed in Table 2 of the manuscript, the mAP improves from 75.88% (Table 2(a)) to 77.01% (Table 2(b)) by adding the point-to-pixel module, precisely indicating the strengthened representation ability of the point cloud backbone, because **only** the point cloud branch is used during inference. To the best of our knowledge, it has never been identified previously that **2D** auxiliary supervision tasks can be used to improve a **3D** backbone.
### **Comment 2:** *I would appreciate some ablations on at least the key components (NLC prediction, point-to-pixel flow) on the Waymo dataset. Nonetheless, it is impressive that a point-based method can achieve such strong performance on Waymo.*
**Response:** Thank you for your valuable comments. We agree with the reviewer that the ablation study will be more convincing on the Waymo dataset. As suggested, we conducted experiments on 20% Waymo training data, verifying consistent improvements from point-to-pixel propagation and 2D NLC supervision (see table below). We also appreciate you noting the strong Waymo results of our **point-based** approach. In fact, we made great efforts to improve the performance of point-based detectors on Waymo. And we found that it is **crucial** to develop a separate head for each category like CenterPoint, aiming to learn the biases of different categories and solve the category imbalance problem.
Table T0: Effect of the key components in BiProDet on WOD val set. We report the results of APH on LEVEL 2.
| Exp. | Point2Pixel | 2D NLC | Veh. | Ped. | Cyc. | Mean |
| :--: | :-: | :----: | :----: | :----: | :----: | :-------: |
| (a) | ✖ | ✖ | 65.67 | 57.34 | 71.64 | 64.88 |
| (b) | ✔ | ✖ | 66.67 | 58.36 | 72.39 | 65.80 |
| (c) | ✔ | ✔ | 67.75 | 59.25 | 73.9 | 66.96 |
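As a side note on the separate-head design mentioned above, one way to picture it (a toy sketch of ours with assumed shapes and names, not CenterPoint's or the paper's code) is one small head per category over shared features, so category-specific scale and bias statistics are learned independently:

```python
import numpy as np

rng = np.random.default_rng(0)

class PerClassHeads:
    """One tiny linear head per category applied to shared RoI features."""
    def __init__(self, in_dim, out_dim, classes):
        # Each class owns its own weight matrix and bias vector.
        self.heads = {c: (rng.normal(size=(in_dim, out_dim)) * 0.01,
                          np.zeros(out_dim)) for c in classes}

    def __call__(self, feats):
        # feats: (N, in_dim) shared features; returns per-class predictions.
        return {c: feats @ W + b for c, (W, b) in self.heads.items()}

heads = PerClassHeads(in_dim=128, out_dim=7, classes=["Veh.", "Ped.", "Cyc."])
preds = heads(np.zeros((16, 128)))
```

This mirrors the motivation stated above: with a shared head, dominant categories can bias the regression targets, while separate heads decouple them.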
### **Comment 3:** *It is slightly confounded with the possibility that NLC supervision is what is improving 3D detection when point-to-pixel flow is added, not the 2D CNN. It would strengthen the paper if point-to-pixel flow improves performance even when 3D NLC supervision (similar to that in Part-A^2) is done.*
**Response:** Thanks for your valuable suggestion. We have experimentally demonstrated the effectiveness of point-to-pixel propagation in Table 2 of our manuscript. Note that the mAP is largely boosted from 75.88% (Table 2(a)) to 77.01% (Table 2(b)) by changing only a **single variable**, i.e., adding the point-to-pixel module. Following your suggestion, we further conducted the ablative experiment of point-to-pixel propagation when 2D NLC supervision is **replaced** with 3D NLC supervision. As shown in the table below, the point-to-pixel module consistently improves the mAP by 0.7%, validating its efficacy independent of the NLC supervision type.
| Exp. | 3D NLC | P2I | Veh. | Ped. | Cyc. | Mean |
| :--: | :----: | :-: | :----: | :----: | :----: | :----: |
| (a) | ✔ | ✖ | 87.09 | 63.94 | 79.91 | 76.98 |
| (b) | ✔ | ✔ | 87.23 | 65.19 | 80.62 | 77.68 |
### **Comment 4:** *I also want to ask if any augmentations are done in 2D. For instance, LR flipping may introduce ambiguity in predicting left vs right side of an object.*
**Response:** We appreciate you raising this important point. We do not perform any data augmentation on the image input. The LR flipping does introduce ambiguity in predicting the relative location of pixels inside an object.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for addressing all of my concerns. The proposed components demonstrate good improvement on the larger Waymo dataset as well, and the 2D gradient flow seems to improve 3D even without 3D NLC supervision, which is an interesting observation. As such, I maintain my original rating. I do hope that the authors could revise the manuscript to make Concern #1 more clear.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer gAXu
Comment: Dear Reviewer gAXu,
We are delighted to learn that our rebuttal effectively addresses your concerns. To address Concern #1, we will ensure that the necessary revisions are made to enhance clarity in the manuscript. Thanks again for your time and efforts in reviewing our manuscript.
Warm regards,
Authors | Rebuttal 1:
Rebuttal: ### General Response
We thank all reviewers for your time and constructive comments. Here we want to summarize a few key clarifications concerning the contributions of our work again:
**(1) The novelty of our work.** The key motivation of our point-to-pixel module is not to propose structurally novel fusion mechanisms. Instead, we aim to **directly boost the representation capability of the point cloud backbone** network by back-propagating **gradients** from the training objectives of the image branch, due to the relatively weak representation capability of current point-based backbones for processing irregular 3D point clouds (see Lines 39-46 of the manuscript). We experimentally demonstrated our claim in Table 2 of our manuscript, where the mAP improves from 75.88% (Table2(a)) to 77.01% (Table2(b)) by adding the point-to-pixel module, **precisely** indicating the strengthened representation ability of the point cloud backbone because **only** the point cloud branch is used during inference. To our knowledge, exploiting **2D** auxiliary tasks to improve **3D** backbones has not been identified before. The concise point-to-pixel design best demonstrates this capability.
It would be better to **holistically** understand the two key components, i.e., the NLC map estimation and point-to-pixel module, rather than in isolation. On one hand, we can learn local spatial-aware information from the image branch with NLC map supervision. On the other hand, we demonstrate that the gradients back-propagated from the image branch through the point-to-pixel module can boost the representation ability of the point cloud backbone. The suitable 2D tasks and the point-to-pixel module are **intricately** linked, and it is only by combining them that the highest detection accuracy can be achieved.
Besides, we demonstrate that the 2D-3D joint learning paradigm benefits **not only** the 3D object detection task **but also** the 2D semantic segmentation task (see Table 7 of the supplementary material). The point features can naturally complement RGB image features by providing 3D geometry and semantics, which are robust to illumination changes and help distinguish different classes of objects for 2D visual information. The results suggest the potential of joint training between 3D object detection and more 2D scene understanding tasks in autonomous driving.
**(2) Experiments on the NuScenes dataset.** Due to limited computational resources, achieving competitive performance on NuScenes in such a short time is very difficult. The employed baseline **point-based** detector (the LiDAR branch) performs **poorly** on NuScenes or Waymo, increasing the experiment workload. Frankly speaking, when preparing this paper, we have made great efforts to improve the performance of point-based detectors on the Waymo dataset to **comparable** accuracy to the BEV-based detectors. Despite the missing experiments on NuScenes, we posit our method **sheds light on** point-based methods by offering new insights and an effective framework. In conclusion, we believe the strong results on KITTI and Waymo sufficiently demonstrate the effectiveness of our proposed method and validate our motivation.
We will make the reviews and author discussion public. Besides, we will include the newly added experiments and analysis in the final manuscript/supplementary material. | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper proposes a method for multimodal (image and LiDAR) 3D object detection. The main purpose of the proposed method is to enhance the image branch. The authors design the task of NLC Map estimation, which is to predict the normalized local coordinates of points within a ground-truth box. The prediction happens on the image plane and the points are projected (carrying their NLC labels). In addition, image semantic segmentation is also adopted as an auxiliary task for the image branch. The image and LiDAR branches fuse their features during the forward pass of their backbones. The LiDAR branch predicts 3D instances based on the RPN proposals and RoI-pooled fused features.
Strengths: 1. Although the idea of predicting local coordinates within a bounding box is similar to Part-A^2, it is interesting to see that it can effectively enhance the image branch. I am also wondering if additionally introducing local coordinate prediction for the LiDAR branch can further boost performance.
2. The proposed method achieves SOTA results on the KITTI dataset, and the effectiveness of the auxiliary tasks are proved by ablation study.
Weaknesses: 1. The experiments are only conducted on the KITTI dataset. In fact, a number of recent multimodal 3D detection methods provide results on the NuScenes dataset, so comparisons on NuScenes would be more convincing.
2. The bidirectional feature propagation module is simply gathering features from another modality by coordinate projection and feature transformation, which is not significantly novel.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Would it be more straightforward to use local coordinate prediction in the LiDAR branch, as in Part-A^2? Could NLC estimation also be performed for points that are not projected to the image plane?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors did not address the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Response to Reviewer bgiL
We sincerely appreciate the reviewer's time and effort in reviewing our paper. Thanks for your valuable comments and recognition of our work. In the following, we will comprehensively address your concerns.
### **Comment 1:** *The experiments are only conducted on the KITTI dataset. The results would be more convincing with comparisons on NuScenes.*
**Response:** We appreciate your suggestion to conduct experiments on the NuScenes dataset. However, due to **limited** computational resources, achieving competitive performance on NuScenes in such a short time is very difficult. The baseline point-based detector (the LiDAR branch) we used performs **poorly** on NuScenes or Waymo, further increasing the experiment workload. Frankly speaking, when preparing this paper, we made great efforts to improve the performance of point-based detectors on the Waymo dataset and achieved accuracy **comparable** to that of the BEV-based detectors. Despite the missing experiments on NuScenes, we posit our method **sheds light on** point-based methods by offering new insights and an effective framework. Also, we believe the strong results on KITTI and Waymo sufficiently demonstrate the effectiveness of our proposed method and validate our motivation.
### **Comment 2:** *The bidirectional feature propagation module is simply gathering features from another modality by coordinate projection and feature transformation, which is not significantly novel.*
**Response:** The key motivation of our point-to-pixel module is **not** to propose structurally novel fusion mechanisms. Instead, we aim to **directly boost the representation capability of the point cloud backbone** network by back-propagating **gradients** from the training objectives of the image branch, given the relatively weak representation capability of current point-based backbones for processing irregular 3D point clouds (see Lines 39-46 of the manuscript). We experimentally demonstrated this claim in Table 2 of our manuscript, where the mAP improves from 75.88% (Table 2(a)) to 77.01% (Table 2(b)) by adding the point-to-pixel module, indicating the strengthened representation ability of the point cloud backbone. To our knowledge, exploiting **2D** auxiliary tasks to improve the representation capability of **3D** backbones has not been explored before. The concise point-to-pixel design best demonstrates this capability. Besides, **Reviewer gAXu** also acknowledged that this approach is non-trivial and interesting.
### **Comment 3:** *Would it be more straightforward to use local coordinate prediction in the LiDAR branch like Part-A^2?*
**Response:** Thank you for the valuable suggestion. We conducted experiments by predicting NLCs directly in the LiDAR branch, which improves mAP from 77.18% to 77.68% (see table below). By contrast, supervising 2D NLC map estimation further boosts mAP to 78.64%, showing superiority in exploiting local spatial-aware features from the image branch to complement sparse point cloud representations. The results validate the advantage of our design.
| Setting | Car | Ped. | Cyc. | mAP |
| :------ | :----- | :----- | :----- | :----- |
| w/o NLC | 87.10 | 64.52 | 79.94 | 77.18 |
| 3D NLC | 87.23 | 65.19 | 80.62 | 77.68 |
| 2D NLC | 87.18 | 67.52 | 81.21 | 78.64 |
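As a plaintext illustration of the NLC targets discussed above, normalized local coordinates map a 3D point into the canonical [0, 1]^3 frame of its ground-truth box. This is a minimal sketch: the function name and the exact box parameterization (center, (l, w, h) dims, z-axis yaw) are assumptions for illustration, not the paper's implementation.

```python
import math

def normalized_local_coords(point, center, dims, yaw):
    """Map a 3D point into the [0, 1]^3 normalized local frame of a box.

    point, center: (x, y, z) tuples; dims: (l, w, h); yaw: rotation about z.
    """
    # Translate to the box frame, then undo the box's yaw rotation.
    dx, dy, dz = (p - c for p, c in zip(point, center))
    c, s = math.cos(-yaw), math.sin(-yaw)
    lx, ly, lz = c * dx - s * dy, s * dx + c * dy, dz
    # Normalize so the box spans [0, 1] along each axis.
    return tuple(u / d + 0.5 for u, d in zip((lx, ly, lz), dims))

# A point at the box center maps to (0.5, 0.5, 0.5) regardless of yaw.
center_nlc = normalized_local_coords((1.0, 2.0, 0.0), (1.0, 2.0, 0.0),
                                     (4.0, 2.0, 1.5), 0.3)
```

In the 2D variant discussed in the rebuttal, these per-point targets would be projected to the image plane and regressed there instead of in the LiDAR branch.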
---
Rebuttal Comment 1.1:
Comment: The authors' rebuttal roughly addressed my concerns, and I maintain my previous rating.
---
Rebuttal 2:
Title: Gentle Reminder
Comment: Dear Reviewer bgiL,
Thank you for taking the time to review our submission and the favorable recommendation. As the discussion phase between the reviewers and authors is coming to an end, we would be grateful if you could acknowledge receipt of our responses and let us know if they address your concerns. We remain open and enthusiastic about any further discussions or clarifications you might deem necessary.
Warm regards,
Authors | null | null | null | null | null | null |
Penguin: Parallel-Packed Homomorphic Encryption for Fast Graph Convolutional Network Inference | Accept (poster) | Summary: To alleviate the dramatic computation and memory overhead in HE-based GCN inference, this paper proposes a new HE-based ciphertext packing technique named Penguin. Penguin focuses on a sequence of matrix-matrix multiplications which is the bottleneck during private GCN inference. Thus, it employs an effective two-dimension parallel packing technique and an interleaved assembly technique to reduce the number of HE rotations while making use of the blank slots in polynomials. This paper also provides theoretical analysis and experimental validation to demonstrate the speedup achieved by Penguin in accelerating private GCN inference. Results show that Penguin can achieve up to ∼ 10× speedup and around ∼ 79% reduction in computational memory overhead, outperforming SOTA solutions.
Strengths: 1. Reasonable motivation. This paper first analyzes the latency breakdown in private GCN inference and finds that the bottleneck is a sequence of matrix-matrix multiplications $A\cdot X\cdot W$ due to the large number of Rotations and CMults, with the inefficient packing method as the deeper cause. Therefore, this paper proposes several packing techniques to solve this issue.
2. Direct and effective method. Based on the node-wise and feature-wise packing, this paper proposes Two-Dimension Parallel-Packing which selects tiling size to minimize the number of rotations. What's more, when feature size is reduced in aggregation, Interleaved Assembling can be used to make a denser packing. Experiments demonstrate the effectiveness of the two methods.
3. Well written. The writing quality of the paper is excellent, with logical coherence.
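To illustrate why the tiling size in strength 2 matters, here is a deliberately simplified rotation-cost model. The packing layout, the 4096-slot count, and the cost formula are illustrative assumptions only (using Cora's 2708 nodes and 1433 features as the example size), not Penguin's actual analysis.

```python
import math

def rotation_cost(n_nodes, n_feats, slots, tile_feats):
    """Toy cost model: pack tile_feats features x (slots // tile_feats)
    nodes per ciphertext; each feature-wise reduction then needs
    log2(tile_feats) rotate-and-sum steps (tile_feats a power of two)."""
    nodes_per_ct = slots // tile_feats
    n_cts = math.ceil(n_nodes / nodes_per_ct) * math.ceil(n_feats / tile_feats)
    return n_cts * int(math.log2(tile_feats))

# Sweeping the feature tile size for a Cora-sized graph shows the cost
# depends strongly on the tiling choice, motivating its optimization.
costs = {t: rotation_cost(2708, 1433, 4096, t) for t in (32, 128, 512)}
```

Under this toy model the smallest tile wins, but the real trade-off also involves CMults and slot utilization, which is exactly what the paper's two-dimension analysis balances.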
Weaknesses: 1. Missing comparison with several highly related works. The node-wise and feature-wise packing methods described in Sec 3 are actually common in SIMD-based HE methods. Thus, I think this paper needs to have a comparison with SIMD-based work like Gazelle [1].
[1] Juvekar, Chiraag, Vinod Vaikuntanathan, and Anantha Chandrakasan. "{GAZELLE}: A low latency framework for secure neural network inference." 27th USENIX Security Symposium (USENIX Security 18). 2018.
[2] Reagen, Brandon, et al. "Cheetah: Optimizing and accelerating homomorphic encryption for private inference." 2021 IEEE International Symposium on High-Performance Computer Architecture (HPCA). IEEE, 2021.
[3] Hao, Meng, et al. "Iron: Private inference on transformers." Advances in Neural Information Processing Systems 35 (2022): 15718-15731.
2. There exists another type of method, coefficient encoding, which achieves extremely high performance in matrix-matrix and matrix-vector multiplication [2,3]. So the SOTA claim made by the authors is questionable.
3. The baseline of the paper is not very strong. How does the proposed method compare to CryptoGCN and other works that directly optimize HE for GCN instead of CNN?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. How does the proposed method compare to the coefficient encoding-based method like Cheetah?
2. How about the comparison with stronger baselines like CryptoGCN that optimizes HE for GCN?
3. In Table 2, Penguin(32,128)+IA significantly reduces the number of CMults by over 10 times, but the latency is only reduced by about 20% compared to Penguin(32,128). Why does this happen?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1. Response to Weakness 1-Missing Comparison:**
Thanks for your constructive feedback. We have conducted additional experiments and reported the comparison results with Gazelle [1] in Table 1 below. As Table 1 shows, our proposed solution outperforms that of Gazelle across all three datasets. The reason is as follows: the proposed hybrid approach in Gazelle [1] is mainly designed to solve the single matrix-multiplication problem in the general FC layer (input feature > output feature). However, GCN inference contains two parts of computation: one is the FC layer (XW) and the other is the adjacency matrix multiplication (AX), which involves a square matrix (fixed size). When computing with an adjacency matrix, the proposed method in [1] would not optimize its packing but instead just adopt the diagonal-wise encoding method from [2]. Thus, our proposed Penguin, with its two-way parallel encoding and optimized sub-block matrix size, performs better here.
**Table 1**
| | Cora | Citeseer | PubMed |
|---|---|---|---|
| Gazelle| 3832.36s| 4727.94s| 158655.54s|
| Penguin(32,128)+IA| 660.67s| 928.1s| 30522.43s|
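For context, the diagonal-wise encoding from [2] that Gazelle falls back to for square matrices can be sketched in plaintext, with list rotations standing in for HE slot rotations. This is a simulation of the textbook Halevi-Shoup method, not Gazelle's actual code.

```python
def rotate(v, k):
    """Left-rotate the slot vector v by k positions (HE rotation analogue)."""
    return v[k:] + v[:k]

def diagonal_matvec(M, v):
    """Halevi-Shoup diagonal encoding: an n x n matrix-vector product via
    n elementwise (SIMD) products and n-1 rotations, in plaintext."""
    n = len(v)
    out = [0.0] * n
    for k in range(n):
        diag = [M[i][(i + k) % n] for i in range(n)]   # k-th generalized diagonal
        rot_v = rotate(v, k)
        out = [o + d * x for o, d, x in zip(out, diag, rot_v)]
    return out

M = [[1.0, 2.0], [3.0, 4.0]]
v = [5.0, 6.0]
result = diagonal_matvec(M, v)   # equals the ordinary product M @ v
```

Because this scheme spends a rotation per diagonal regardless of structure, a large square adjacency matrix is exactly where a better-chosen packing can pay off.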
**2. Response to Weakness 2 and Question 1-Why CKKS not coefficient encoding method [3]:**
Thanks for your constructive comments regarding coefficient-encoding work. There are several differences between our solution and the referred coefficient encoding work. First,
our work has a different threat-model setting from [3]. The coefficient-encoding, rotation-free HE in [3] requires client assistance for private inference, whereas our work assumes the client does not have much computation capability and does not participate in the inference computation process. The method used in [3] is BFV, which requires the client to decrypt the extracted LWE ciphertext, re-encrypt the intermediate results, and then send the new ciphertexts to the server, after which the server can perform the next layer's computation. As a result, communication is the bottleneck in their proposed method [3]. However, in our CKKS HE without-client-aid setting, the client only needs to encrypt and send the data to the server once; the server performs the entire computation and sends only the final encrypted result back to the client. It does not require frequent communication between client and server as in [3]. Therefore, HE computation, instead of communication, becomes the bottleneck. In our view, our HE without-client-aid setting and [3]'s MPC+HE setting represent two orthogonal directions to realize PPML, and they are suitable for different private inference scenarios. Our paper aims at reducing the server's computation overhead under the HE without-client-aid setting. It is very difficult to directly compare these two approaches, as both have pros and cons under different assumptions. In the future, it would be interesting to explore the coefficient encoding method for HE-based GCN inference. We will include such discussion in the related work based on the reviewer's further suggestions.
**3. Response to Weakness 3 and Question 2-Comparison with CryptoGCN:**
Thanks for your comments. Our experiments have included a fair comparison with CryptoGCN. For CryptoGCN, the key idea is to use Adjacency Matrix Aware (AMA) ciphertext encoding technique, followed by the patterned sparse matrix partitioning to take advantage of the sparsity of unencrypted adjacency matrix A, so as to reduce the costly HE operations in GCN.
While deploying the AMA encoding technique is still applicable when both graph node features and structure are encrypted, we would like to emphasize that the patterned sparse matrix partitioning technique is not applicable since here the adjacency matrix A is encrypted and its sparsity cannot be exploited. Therefore, to make our work comparable with CryptoGCN under the same setting (both encrypted features and adjacency matrix), in our experiments, we implemented CryptoGCN's AMA encoding format as the node-wise format which encodes the features from the same node into one ciphertext (see Table 2 in our paper, Cora-Penguin(1433,1), Citeseer-Penguin(3703,1), and PubMed-Penguin(19717,1)). As Table 2 shows, the performance of our method (Penguin 32,128) is much better than those with the AMA encoding from CryptoGCN. The reason is twofold: 1) the encrypted adjacency matrix does not offer the public sparsity information that CryptoGCN exploits; 2) our method performs two-dimensional optimization (feature-node AXW), whereas CryptoGCN's AMA encoding focuses only on one-direction optimization (the adjacency matrix multiplication AX).
**4. Response to Question 3-concern about the weird data**
Thanks for your careful review. The number should be 93K instead of 9.3K. Sorry about the typo. We will fix this in a future revision of our manuscript.
Reference:
[1] Juvekar, Chiraag, Vinod Vaikuntanathan, and Anantha Chandrakasan. "{GAZELLE}: A low latency framework for secure neural network inference." 27th USENIX Security Symposium (USENIX Security 18). 2018.
[2] Halevi, Shai, and Victor Shoup. "Algorithms in helib." Advances in Cryptology–CRYPTO 2014: 34th Annual Cryptology Conference, Santa Barbara, CA, USA, August 17-21, 2014, Proceedings, Part I 34. Springer Berlin Heidelberg, 2014.
[3] Huang, Zhicong, et al. "Cheetah: Lean and fast secure {two-party} deep neural network inference." 31st USENIX Security Symposium (USENIX Security 22). 2022. | Summary: The paper introduces Penguin, a novel HE-based ciphertext packing technique for accelerating GCN inference on encrypted graph data while ensuring data privacy. By exploiting the unique computation pattern of GCN layers, Penguin reduces computation and memory overhead associated with HE operations. The technique achieves significant speedup and reduction in computational memory overhead compared to state-of-the-art solutions. This work is the first to address the protection of both graph structure and features in accelerating HE-GCN inference on encrypted data.
Strengths: Introduces Penguin, a novel HE-based ciphertext packing technique for accelerating GCN inference on encrypted graph data.
Exploits the computation pattern of GCN layers to reduce computation and memory overhead associated with HE operations.
Provides theoretical analysis and experimental validation to demonstrate the speedup achieved by Penguin.
Achieves significant speedup and reduction in computational memory overhead compared to state-of-the-art solutions.
Addresses the protection of both graph structure and features in HE-GCN inference on encrypted data.
Weaknesses: It would be beneficial to provide more detailed comparisons with existing approaches to highlight the advantages of Penguin.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: -
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The paper could benefit from further discussion on the limitations and potential future directions of the proposed technique.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Weakness- More comparisons with other existing works:**
Thanks for your comments. In addition to the baselines E2DM [3] and uSCORE [4], we have conducted additional experiments and reported the comparison results with other existing relevant approaches (e.g., Gazelle [1] and HElayers [2]) in the following table. In particular, HElayers [2] is a state-of-the-art HE packing method suggested by Reviewer pXg9, and Gazelle [1] is another work aiming to address the single matrix-multiplication problem in the general FC layer, suggested by Reviewer idN7. As Table 1 shows, our Penguin still achieves the best latency performance because it takes advantage of the unique computation pattern during HE-GCN inference to significantly reduce the computation and memory overhead associated with HE operations. The following table shows the comparison results:
**Table 1**
| | Cora | Citeseer | PubMed |
|---|---|---|---|
| Gazelle| 3832.36s| 4727.94s| 158655.54s|
| HElayers | 2102.47s| 3044.58s| 103283.56s|
| Penguin(32,128)+IA| 660.67s| 928.1s| 30522.43s|
References:
[1]Juvekar, Chiraag,Vinod Vaikuntanathan, and Anantha Chandrakasan. "{GAZELLE}: A low latency framework for secure neural network inference." 27th USENIX Security Symposium (USENIX Security 18). 2018.
[2]Aharoni et al., "HElayers: A tile tensors framework for large neural networks on encrypted data, " PoPETs 2023.
[3] Jiang, Xiaoqian, et al. "Secure outsourced matrix computation and application to neural networks." Proceedings of the 2018 ACM SIGSAC conference on computer and communications security. 2018.
[4] Huang, Zhicong, et al. "More efficient secure matrix multiplication for unbalanced recommender systems." IEEE Transactions on Dependable and Secure Computing (2021). | Summary: This paper proposed an efficient data-packing method for cryptographically-secure inference on GCN, where the feature matrix and adjacency matrix are encrypted using homomorphic encryption (HE). The problem statement is interesting as the GCNs typically exhibit a significant sparsity level, increasing the number of rotations and, consequently, causing an increase in the HE latency.
Strengths: 1. The proposed solution is *novel* for dealing with rotation inefficiency in HE, which stems from the high degree of sparsity in GCN.
2. The presentation of the proposed solution is excellent, and the paper provides clear and detailed information about the experimental methodology, including the specific parameters of HE.
Weaknesses:
**Comparison with SOTA data packing method**
Table 3 in the paper does not compare the proposed solution with the state-of-the-art HE-packing method [1]. It is worth noting that HElayers [1] incorporates a packing optimizer that selects the most efficient packing method to optimize latency and memory usage in HE convolution operations. Conducting a study on the suitability of the HElayers packing for graph convolution and comparing it with the proposed solution could provide valuable insights.
**Lack of experiments for showing the impact of sparsity on the efficacy of the proposed packing method**
An experimental study and discussion on the impact of sparsity on the effectiveness of existing/state-of-the-art HE packing methods versus the proposed methods in GCNs would provide valuable insights.
**CryptoGCN + Penguin**
Including an experimental analysis in the paper would have been beneficial to understand how the proposed approach in CryptoGCN (NeurIPS'22) for reducing the multiplicative depth can further enhance the efficiency of the proposed HE packing method.
1. Aharoni et al., "HElayers: A tile tensors framework for large neural networks on encrypted data, " PoPETs 2023.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Can we employ the rotation-free HE implementation [1] for faster private inference on GCNs? How does the sparsity in GCN impact the solution proposed for making HE rotation free [1]?
1. Huang et al., "Cheetah: Lean and fast secure two-party deep neural network inference," USENIX Security 2022.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: Not addressed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1. Response to Weakness 1-Comparison with SOTA data packing method:**
Thanks for your constructive comments. We have conducted the experiments and reported the comparison results with HElayers in Table 1 below. As Table 1 shows, our solution, Penguin, beats HElayers consistently across the datasets because: 1) Principally, HElayers [2] focuses on the computation of a fully-connected layer, which is a one-direction (single) matrix multiplication. However, GCN inference has a different bottlenecked computation pattern, i.e., the two-way matrix multiplication (AXW) in each layer. Our proposed method targets an optimization problem for two-way matrix-matrix multiplication (adjacency matrix multiplication and fully-connected layer weight matrix multiplication) and derives the best matrix blocking size with a theoretical guarantee. 2) In HElayers, the matrix-multiplication method has the same computation complexity as E2DM [3] (see Section 8.1 of [2]) and also requires a series of fixed square sub-block matrices to replace the original matrix multiplication. Thus, similar to E2DM (a benchmark we compared in Table 3 of our paper), the matrix-multiplication method in [2] cannot handle the problem of wasted slots. Therefore, as expected in Table 1, our two-way parallel packing achieves much better HE-GCN inference performance than that of HElayers. We will incorporate such discussions in the experiments based on the reviewer's advice.
**Table 1**
| | Cora | Citeseer | PubMed |
|---|---|---|---|
| HElayers | 2102.47s| 3044.58s| 103283.56s|
| Penguin(32,128)+IA| 660.67s| 928.1s| 30522.43s|
**2. Response to Weakness 2-Lack of experiments for showing the impact of sparsity on the efficacy of the proposed packing method**
We appreciate your question regarding the impact of sparsity on the efficacy of the packing method. However, we would like to clarify that in this work, we assume the adjacency matrices (A) are encrypted, so the actual element values in the matrix cannot be seen by the server. In other words, both the adjacency matrix A and the feature matrix X are encrypted, which differs from CryptoGCN, where the adjacency matrix A is a plaintext matrix whose sparsity can be exploited along with ciphertext packing for speedup. Thus, we cannot leverage the sparsity of the adjacency matrix to skip redundant HE operations as CryptoGCN does.
**3. Response to Weakness 3-CryptoGCN + Penguin**
Thanks for your suggestion. We have tried reducing the activation layers of our evaluated GNN models for multiplicative depth reduction, and the corresponding accuracy and latency results are reported in Tables 2 and 3 below, respectively. As Table 2 shows, the accuracy drops when pruning 2 activations, especially for the Citeseer dataset. If we prune two activation layers, the saved two levels would reduce the required ciphertext modulus Q from 218 to 158. However, this does not allow us to change the polynomial degree from 8192 to 4096 as CryptoGCN does, because we still have to use N=8192 to maintain at least a 128-bit security level for the current Q [1]. As Table 3 illustrates, it can still achieve latency reduction, but the improvement is not as significant as that of CryptoGCN, despite the more prominent accuracy drop.
**Table 2**
| | Cora | Citeseer | PubMed |
|---|---|---|---|
| No activation pruning | 0.974 | 0.747 | 0.858 |
| With 2 activations pruned| 0.958 | 0.659 | 0.855 |
**Table 3**
| | Cora | Citeseer | PubMed |
|---|---|---|---|
| Q=218 | 660.67s| 928.1s| 30522.43s|
| Q=188 | 453.56s | 642.4s| 20855.38s|
| Q=158 | 353.07s| 501.51s| 16215.66s|
**4. Response to Question 1-Rotation-free approaches**
Thanks for your interesting question. The rotation-free implementation in [1] is in an MPC+HE setting. It needs the client to perform decryption and re-encryption to complete the computation process and achieve rotation-free operation. In this way, the computation latency on the server can be low. However, this method comes at the cost of communication between server and client (one-layer computation with one-time communication). Also, the client needs to frequently decrypt and then re-encrypt the intermediate results sent by the server. In our work, we assume an HE without-client-aid setting and do not require the client to interact with the server during the inference. The setting and application scenario are different from those of MPC+HE. In our setting, the HE computation at the server, instead of communication, becomes the bottleneck. Therefore, we think the two approaches represent different directions to achieve privacy-preserving machine learning, applicable to different scenarios, and both solutions have pros and cons. In our view, it would be interesting to explore such a rotation-free solution for private GCN inference in future work.
In regard to sparsity's impact on the rotation-free solution, our understanding is that if the adjacency matrix is assumed to be unencrypted, it is still possible to use its sparsity to skip some operations, such as two polynomial multiplications. However, in our work, we would like to clarify that we cannot leverage the sparsity to skip HE operations, as the adjacency matrix is encrypted.
Reference:
[1] Laine, Kim. "Simple encrypted arithmetic library 2.3. 1." Microsoft Research https://www. microsoft. com/en-us/research/uploads/prod/2017/11/sealmanual-2-3-1. pdf (2017).
[2] Aharoni et al., "HElayers: A tile tensors framework for large neural networks on encrypted data, " PoPETs 2023.
[3] Jiang, Xiaoqian, et al. "Secure outsourced matrix computation and application to neural networks." Proceedings of the 2018 ACM SIGSAC conference on computer and communications security. 2018.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment:
Thank you for the detailed rebuttal. Given the tight timeline, I appreciate the authors' effort in presenting additional experimental data, especially a comparison with HElayers, and discussing the shortcomings of the rotation-free HE implementation. I would encourage the authors to include these data and discussions in the updated version of the paper. Based on the rebuttal, I increased my score to Accept.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer pXg9
Comment: We sincerely thank you for your very helpful comments and strong support of our work. We will incorporate the discussion in the paper as you suggested. Thanks again!
Strengths: 1. The optimizations for parallel packing introduced in this work are well motivated and justified with experimental data.
2. The technique provides a significant latency improvement over existing matrix multiplication techniques empirically.
3. I found the overall in-depth analysis of GCN computation and utilizing it for optimizing packing and reducing wasted operations with interleaved assembling to be well designed.
Weaknesses: 1. I wonder if a fair evaluation would include CryptoGCN with an encrypted adjacency matrix.
2. The latency is extremely high for practical purposes; however, there isn't much work on this specific problem, so it is difficult to gauge the difficulty of the task.
3. How does this technique compare with SecGNN[1]? (Ignoring the training aspect)
References -
1. Wang, Songlei, Yifeng Zheng, and Xiaohua Jia. "SecGNN: Privacy-preserving graph neural network training and inference as a cloud service." IEEE Transactions on Services Computing (2023).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: None
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I could not find any discussion on limitations by the authors even though they claim it in the submission form.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1. Response to Weakness 1-if a fair evaluation would include CryptoGCN:**
Thanks for your comments. Our experiments have included a fair comparison with CryptoGCN. For CryptoGCN, the key idea is to use Adjacency Matrix Aware (AMA) ciphertext encoding technique, followed by the patterned sparse matrix partitioning to take advantage of the sparsity of unencrypted adjacency matrix A, so as to reduce the costly HE operations in GCN.
While deploying the AMA encoding technique is still applicable when both graph node features and structure are encrypted, we would like to emphasize that the patterned sparse matrix partitioning technique is not applicable since here the adjacency matrix A is encrypted and its sparsity cannot be exploited. Therefore, to make our work comparable with CryptoGCN under the same setting (both encrypted features and adjacency matrix), in our experiments, we implemented CryptoGCN's AMA encoding format as the node-wise format which encodes the features from the same node into one ciphertext (see Table 2, Cora-Penguin(1433,1), Citeseer-Penguin(3703,1), and PubMed-Penguin(19717,1)). As Table 2 shows, the performance of our method (Penguin 32,128) is much better than those with the AMA encoding from CryptoGCN. The reason is twofold: 1) the encrypted adjacency matrix does not offer the public sparsity information that CryptoGCN exploits; 2) our method performs two-dimensional optimization (feature-node AXW), whereas CryptoGCN's AMA encoding focuses only on one-direction optimization (the adjacency matrix multiplication AX).
**2. Response to Weakness 2-concern about the difficulty about our research problem:**
Homomorphic Encryption (HE) is a promising technology for realizing PPML inference. However, the well-known challenge of HE is its large computation and memory (and thus latency) overhead, especially for machine learning. This leads to prolonged inference latency, which greatly hinders its practicality at the current stage, as illustrated in many prior HE-based PPML works for deep CNN models, e.g., ResNet-20 [4,5]. Compared with HE-based CNNs, HE-based GCN is an emerging area that has been far less explored so far. To the best of our knowledge, the SOTA work for HE-based GCN inference, CryptoGCN, requires an inference latency of 4273.89s for a 25×25 adjacency matrix due to the unique and expensive matrix-matrix multiplications in the encryption domain. Compared with CryptoGCN, our work involves much larger adjacency matrices (e.g., 19717×19717), for which the computation and latency overhead is expected to be much higher. While our work has not yet reached the latency requirement of practical applications, it represents an early attempt along this challenging direction. We believe there is room for further performance improvement from the aspects of algorithms and hardware. For example, with the advancement of dedicated HE hardware accelerators [2,3], the HE operation latencies could be further reduced.
**3. Response to Weakness 3-comparison with SecGNN:**
SecGNN is based on a technique called additive secret sharing in a three-server setting, where the overhead mainly stems from the communication among the servers. However, in our HE setting without client aid (no communication between server and client during inference), the overhead comes mainly from the HE operations (e.g., rotation and ciphertext-ciphertext multiplication) needed to complete the required computations on the server. These two methods represent two directions for achieving PPML under different application scenarios or settings: the former is bottlenecked by communication while the latter is bottlenecked by computation. Therefore, we believe it is difficult to compare them directly.
References:
[1] Lee, Eunsang, et al. "Low-complexity deep convolutional neural networks on fully homomorphic encryption using multiplexed parallel convolutions." International Conference on Machine Learning. PMLR, 2022.
[2] Samardzic, Nikola, et al. "F1: A fast and programmable accelerator for fully homomorphic encryption." MICRO-54: 54th Annual IEEE/ACM International Symposium on Microarchitecture. 2021.
[3] Kim, Sangpyo, et al. "Bts: An accelerator for bootstrappable fully homomorphic encryption." Proceedings of the 49th Annual International Symposium on Computer Architecture. 2022.
[4] Lee, Eunsang, et al. "Low-complexity deep convolutional neural networks on fully homomorphic encryption using multiplexed parallel convolutions." International Conference on Machine Learning. PMLR, 2022.
[5] Ran, Ran, et al. "SpENCNN: Orchestrating Encoding and Sparsity for Fast Homomorphically Encrypted Neural Network Inference." International Conference on Machine Learning. PMLR, 2023. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Flat Seeking Bayesian Neural Networks | Accept (poster) | Summary: This paper proposes modifying the loss used for Bayesian neural networks (BNNs) to take into account the sharpness/flatness of the loss with respect to the model parameters. Theory, based on that of sharpness-aware minimization (SAM), is developed to propose this loss modification. Making BNNs sharpness-aware led to increased accuracy, decreased negative log-likelihood (NLL), and decreased expected calibration error (ECE) for large neural network models across CIFAR-10, CIFAR-100, and ImageNet.
Strengths: This paper does an excellent job of combining sharpness-aware and BNN methods. The theory for this is given and the theory is applied to several BNN method, namely stochastic weight averaging Gaussian (SWAG), stochastic gradient Langevin dynamics (SGLD), Monte Carlo (MC) dropout, and deep ensembles, for CIFAR-10 and CIFAR-100. These experiments show reasonable improvements.
Weaknesses: 1. The difference between the optimization problems used in the different Bayesian methods was not clear enough. The main optimization problem discussed was for variational inference. To my knowledge, however, SWAG and SGLD typically use a procedure that is based on the gradient of the log of the posterior.
2. The incomplete ImageNet experiment. Only SWAG and FSWAG are evaluated for ImageNet, versus all of the methods that were applied to CIFAR-10 and CIFAR-100.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Are different optimization problems used for some of the studied methods? If so, please add those details to Section 3.
2. Is it possible to have results on ImageNet for more of the studied methods?
3. I'd recommend putting the number of repetitions used to create the confidence intervals in the captions of the tables. I see that, in the supplementary material, it was three, but I think that this information, along with the number of samples used for the methods, should be in the main text.
4. How do you think that sharpness-aware methods would interact with Laplace approximation methods, another popular BNN method? (This is not a necessary addition to the paper, in my opinion. However, I am interested in your answer to this.)
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Limitations and societal impact are not discussed. One limitation that could be added to the conclusion is that the proposed Gaussian variational approach only uses a diagonal covariance for the posterior approximation. While this is standard, it is a limitation that, by mentioning it, could show areas of potential improvement for the proposed variational method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We sincerely appreciate your constructive comments. We are dedicated to addressing all the questions listed below to the best of our capabilities.
**Is it possible to have results on ImageNet for more of the studied methods?**
We acknowledge the limited number of experiments on ImageNet and would like to report the additional results of SGLD and F-SGLD in Table 1 of the attached pdf. It is important to emphasize that training models on the ImageNet dataset consumes significantly more time than on other datasets, making it challenging to expand the experiments within a limited rebuttal period. However, we wish to maximize our efforts and add as much as possible to the revised version.
**I’d recommend putting the number of repetitions used to create the confidence intervals in the captions of the tables**
We will carefully revise the paper incorporating your suggestions to enhance its quality and comprehensiveness.
**How do you think that sharpness-aware methods would interact with Laplace approximation methods, another popular BNN method?**
Thanks for this interesting but challenging question. A possible solution is to start from our proposed sharpness-aware posterior $p^{SA}$ and then perform second-order Taylor expansion for $\text{ln } p^{SA}(\theta \mid \mathcal{D})$. The $\theta_{MAP}$ is the SAM solution and we can resort to some techniques mentioned in the paper [1] to approximate the Hessian matrix at $\theta_{MAP}$.
We can even go further with more complicated formulations by taking into account the derivative of $s(\theta) = \operatorname{argmax}_{\theta' \in B_\rho(\theta)} \mathcal{L}_S(\theta')$ in the derivations, which potentially leads to a Hessian matrix.
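A hedged sketch of the suggested expansion (this is the standard Laplace form; the Hessian symbol $H$ and the resulting Gaussian below are illustrative notation, not taken from the paper):

```latex
% Second-order Taylor expansion of the log sharpness-aware posterior
% around the SAM solution \theta_{MAP} (illustrative notation):
\ln p^{SA}(\theta \mid \mathcal{D})
  \approx \ln p^{SA}(\theta_{MAP} \mid \mathcal{D})
  - \tfrac{1}{2}\,(\theta - \theta_{MAP})^{\top} H\, (\theta - \theta_{MAP}),
\qquad
H = -\,\nabla^{2}_{\theta} \ln p^{SA}(\theta \mid \mathcal{D})\big|_{\theta = \theta_{MAP}},
```

which would yield the Gaussian approximation $p^{SA}(\theta \mid \mathcal{D}) \approx \mathcal{N}(\theta_{MAP}, H^{-1})$, with $H$ approximated by the techniques in [1].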
[1] Daxberger, E. et al. : Laplace redux-effortless bayesian deep learning, NeurIPS21.
---
Rebuttal Comment 1.1:
Comment: Thank you for the additional ImageNet results.
I'd mention SWAG and SGLD in Section 3.1 to add a link in the text between your proposed method and those methods.
Thank you for adding the number of samples used to compute the confidence intervals to the table.
Thank you for the information on the interaction between your sharpness-aware method and the Laplace approximation.
---
Reply to Comment 1.1.1:
Title: Mention SWAG and SGLD in Section 3.1 to add a link in the text between your proposed method and those methods
Comment: Thank you for your suggestion. We believe that mentioning it would enhance the clarity of our paper's setting, making it more accessible for readers to follow. | Summary: This paper introduces a method called Sharpness-Aware Bayesian Neural Networks (SABNN) that aims to improve generalization performance on test datasets. The key idea is to replace the negative empirical loss function with the negative SAM (Sharpness-Aware Minimization) [1] loss function, enabling consideration of the flatness of the loss surface. The authors provide both theoretical analysis and empirical evidence to demonstrate that SABNN achieves better generalization compared to existing methods when evaluated on test datasets.
References
[1] Foret, P., A. Kleiner, H. Mobahi, et al. Sharpness-aware minimization for efficiently improving generalization. In International Conference on Learning Representations. 2021.
Strengths: Originality
- The paper presents a theoretical demonstration of the robustness of sharpness-aware posterior inference for the entire distribution of the true dataset.
- The authors propose a straightforward variational approach that utilizes a Gaussian approximate posterior for efficient and straightforward inference.
Clarity
- The paper is well-written and provides clear explanations, making it easy to understand and follow.
Weaknesses: Method
- The method described in the paper involves a straightforward and nearly direct modification of the likelihood function by replacing the negative empirical loss with the negative SAM loss function.
Experiment
- In section 4.3 of the paper, the authors mention that the sharpness scores are provided in the supplementary material. However, I was unable to locate the results in the supplementary material.
- The paper does not discuss any potential additional computational costs associated with the proposed method in comparison to the baselines.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Experiment
- It is requested to provide the sharpness scores and the largest Hessian eigenvalues for the entire dataset, as well as for the SWAG and F-SWAG models. I think the eigenvalues can be calculated for the mean $\mu$ of each method.
- There is a desire for a discussion regarding potential additional computational costs associated with the proposed method in comparison to the baselines.
- The results of the Deep Ensemble, which is an ensemble of independently trained models without KL in Bayesian Deep Ensemble and with L2 regularization loss, trained with SAM loss, are also requested. The aim is to assess whether the proposed SABNN method is practically beneficial or not.
I am willing to increase my score after considering the experiment results provided in the rebuttal.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitation
- This paper did not address the limitation of the proposed method.
- One possible limitation can be additional computational costs.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive comments. Based on your suggestion, we conducted some more experiments (reported in the attached pdf file) and hope that we can address some of your points as presented below.
**Largest Hessian eigenvalue**
We report the log scale of the largest eigenvalue of the Hessian matrix over several methods applying to WideResNet28x10 on CIFAR100, and the ratio of the largest and fifth eigenvalue as shown in Table 4 in the pdf attached, which evidently indicates that our method updates models to minima having lower curvature.
**The sharpness scores**
We report the sharpness scores of the PreResNet-164 network for SWAG, and F-SWAG training on CIFAR100 in Figure 1 of the attached pdf. This shows that our proposed approach gains better sharpness scores than the baseline.
**Computational cost**
We further report the computational cost of several experiments in Tables 1 and 4 in the attached pdf. Similar to SAM, our flat-seeking method involves the computation of gradients twice. The initial computation is to derive the perturbed model $\theta'$ and the subsequent one is to update the model. As a result, the training time is nearly double in comparison to non-flat counterparts. We will explicitly discuss this in the limitation section.
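The two-gradient update described above can be sketched in plain Python; the quadratic toy loss and the `rho`/`lr` values are illustrative choices, not the paper's settings.

```python
import numpy as np

def loss_grad(theta):
    # Gradient of the toy loss L(theta) = 0.5 * ||theta||^2.
    return theta

def sam_step(theta, rho=0.05, lr=0.1):
    # First gradient pass: find the ascent direction.
    g = loss_grad(theta)
    # Move to the perturbed model theta' on the rho-ball boundary.
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Second gradient pass: evaluate the gradient at the perturbed model.
    g_perturbed = loss_grad(theta + eps)
    # Update the original model with the perturbed gradient.
    return theta - lr * g_perturbed

theta = np.array([1.0, -2.0])
theta_new = sam_step(theta)
```

The doubled training time follows directly from the two calls to `loss_grad` per update.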
**Additional experiments on without the regularization term for the deep ensemble**
We present the result of training Deep-ensemble with SAM following the formula for the flat version (Section 2 supplementary) _without KL (or L2 regularisation)_ in Table 2 of the attached pdf. Each experiment is performed three times and the mean and standard deviation are reported. Based on the result, the flat version *with KL divergence* performs better than the one *without KL divergence*.
---
Rebuttal Comment 1.1:
Title: Please kindly let us know if our additional experiments address your concerns
Comment: Dear Reviewer,
Could you please look at our rebuttal and kindly let us know if the additional experiments address your concerns? Really appreciate your time and effort to review our paper.
Regards,
Authors of the paper 6164 | Summary: This paper extends SAM -- sharpness aware minimization framework of Foret et al, who seeks parameters of Neural Networks in the regimes of flat loss landscape. The current understanding is that a model that exhibits a flat loss landscape exhibits better generalization performance. Particularly in this paper, such methods are extended to variational inference for neural networks, where the posterior (and hence neural network training) is obtained through an objective function that seeks flat local minima. Experiments are conducted, which were designed to incorporate their objective function into the current Bayesian Neural Networks. The results show consistent improvements over the vanilla counterparts.
Strengths: The contribution of the paper seems interesting. The framework utilizes PAC-Bayes-like theory, where an upper bound on the true loss function is obtained through a careful balance between the empirical loss and a certain KL divergence. The latter regularizes the Bayesian neural network against overfitting. It is interesting to see how variational inference fits the true posterior, which is in turn modified according to the flat-local-minima-seeking objective. As SAM is one of the popular methods for understanding generalization (while being simple), such extensions to Bayesian Neural Networks might be meaningful to the community.
Weaknesses: I am not an expert in this specific area of combining geometric insights into variational inference. I lean to accept the paper due to my inability to fully grasp this concept. I hope that other reviewers can provide more meaningful feedback for improvements here.
For the weakness:
- presentation could perhaps improve a bit more.
1. in the abstract, I do not see the logical connection between the generic explanations of Bayesian Neural Networks and generalization. The switch was, to me, a bit disconnected and sounded a bit sudden. In the introduction, I also did not see much of a role for the 2nd paragraph. Indeed, there have been many variational inference approaches, but I could not see the connection between the expressivity of the posterior and the approaches of seeking flatness in variational inference.
2. in the theoretic part, I see several equations that repeat also in other papers, e.g., the upper bound of true loss with empirical and KL divergence, how the true posterior provides tight generalization bound, etc. While I appreciate the author's efforts, especially in the appendix for many derivations, it may also make sense to provide citations more in the corresponding texts, so that the readers may be able to learn also through other papers.
3. lines 170-180 should be extended a bit more. I could not understand the connection between each question on page 5 and also missed insights between each derivation.
4. It might be helpful to include an algorithm part, so that the readers may better comprehend how the method works (and hopefully also why).
- empirical improvements seem rather minor, and it was not clear to me how these experiments demonstrate generalization.
While I appreciate the ample empirical evidence provided by the paper, most of the improvements in terms of accuracy are less than 1 percent. This raises the question of whether the experimental design here is adequate to show generalization performance. Tables 3, 4, and 5 need error bars, which are a requirement in Bayesian machine learning works.
On how to test generalization, I think the paper could adapt few-shot settings or experiments from meta-learning. Therein, little training data is provided, and one may be able to test better how the model still generalizes. I am not sure, how testing on vanilla cifar10, cifar100 or imagenet classification score is an indication of generalization performance.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: --
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 1 poor
Contribution: 3 good
Limitations: I could not find the limitation section in the main body of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your constructive comments. We carefully address all the questions listed below to the best of our capabilities and improve our paper based on that.
**Presentation could perhaps improve a bit more.**
Thank you for pointing this out. We will enhance the clarity and motivation behind the fusion of Bayesian Neural Networks and the concept of generalization in the revised version.
**Provide citations more in the corresponding texts, so that the readers may be able to learn also through other papers.**
Thank you for recognizing our efforts in the theoretical derivations. We will definitely provide citations for equations that have also been explored in other works. This will offer readers a broader context for understanding our extensions.
**I could not understand the connection between each question on page 5 and also missed insights between each derivation.**
We apologize for any confusion caused. We will begin by presenting the proof of Theorem 3.2, which extends the concept of sharpness-aware minimization to general and empirical losses over a distribution Q.
In the following derivation, we modify the upper-bound of the inequality to explicitly represent _the sharpness over the distribution Q in the model space_. This modification aims to offer readers a clearer understanding of the concept of sharpness-aware minimization over Q.
Notably, the upper-bound presented in Equation (5) is directly influenced by the insights from Theorem 3.2.
In the revised version, we will ensure that these connections are articulated more comprehensively to enhance clarity.
**Include an algorithm part**
We appreciate your suggestion. While we view our approach as a general framework applicable to various algorithms and settings, we recognize the importance of including specific algorithms for better comprehension. In the revised version, we will present algorithms for different settings to enhance clarity.
**Empirical improvements seem rather minor, and it was not clear to me, how these experiments demonstrate generalization. Table 3, 4, and 5 need error bars, which is requirements in Bayesian machine learning works.**
We understand the importance of error bars for each experiment in the BNN field and tried our best to provide as many as possible within the rebuttal time. Due to the limited rebuttal time, we can only report error bars for the Gaussian posterior experiments, as shown in Table 3 in the attached pdf. We will try our best to report error bars for our approaches and the baselines in the revised version. Meanwhile, we were able to conduct one more setting of SGLD and F-SGLD on ImageNet (Table 1 in the attached pdf) and also report the largest eigenvalue of the Hessian matrix over several methods, and the ratio of the largest and fifth eigenvalues (Table 4 in the attached pdf), as additional evidence that our method updates models to minima having lower curvature (based on the connection between the geometry of the loss landscape and generalization, which has been studied extensively [1]).
[1] Gintare Karolina Dziugaite and Daniel M Roy. Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data. arXiv preprint arXiv:1703.11008, 2017.
**Adapt few-shot settings or experiments from meta-learning.**
Thank you for your suggestion on evaluating generalization under alternative settings. They are directly related to generalization ability, and our approaches could potentially be applied there; however, we leave this for future work. Currently, our paper mainly focuses on experiments on Bayesian Neural Networks and the effectiveness of the sharpness-aware posterior. The effectiveness is demonstrated through the consistent improvement in accuracy, which is a reasonable measurement of generalization ability. In Table 5 in the main paper, we also evaluate our proposed approach against the non-flat versions in an out-of-distribution setting. In particular, different types of corruption and noise are added to the original images. The consistent improvement in both overall accuracy and ECE is additional evidence of the generalization ability of our proposed methods.
---
Rebuttal Comment 1.1:
Title: Response to the authors
Comment: I would like to thank the authors for the efforts. I have read the rebuttal and comments from other reviewers.
I decided to stand with the score of borderline. More thoughtful experiments, which better showcase the strength of the methods could make the paper more relevant to the community.
---
Reply to Comment 1.1.1:
Comment: We respect your decision. Thank you for taking the time to review our paper and hope we have a chance to improve it in the revised version based on your suggestion. | Summary: The paper proposes a new approach to posterior inference for Bayesian neural networks that takes into account the sharpness/flatness of deep learning models, leading to better generalisation ability. The authors introduce the Sharpness-Aware Posterior (SA-Posterior), which allows the sampling of a set of flat models that improve model generalisation. The paper presents a theoretical framework for the SA-Posterior, including a Bayesian framework and a variational inference approach. The authors demonstrate the effectiveness of their approach through experiments on various datasets, showing that it outperforms existing methods on all metrics of interest.
Strengths: - The paper presents a novel approach to posterior inference for Bayesian neural networks.
- The theoretical framework for the SA-Posterior is developed, including a Bayesian setting and a variational inference approach.
- The effectiveness of the SA-Posterior is demonstrated through experiments on various datasets.
- The paper provides insights into the importance of sharpness/flatness in deep learning models.
- The paper contributes to the growing body of research on improving the generalisability of deep learning models.
Weaknesses: - The experiments could be extended to include more datasets and models.
- The paper does not discuss the computational complexity of the SA-Posterior approach.
- The paper does not compare the SA-Posterior approach with other recent approaches to improve the generalisation ability of deep learning models.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - How does the SA-Posterior approach compare to other recent approaches to improve generalisation in deep learning models?
- How does the computational complexity of the SA-Posterior approach compare with other approaches to posterior inference for Bayesian neural networks?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: - Investigate the potential of the SA posterior approach for other types of neural networks beyond Bayesian neural networks.
- Investigate the computational complexity of the SA-Posterior approach and develop methods to reduce it.
- Investigate the potential of the SA-Posterior approach for transfer learning and domain adaptation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your constructive comments. We carefully address all the questions listed below to the best of our capabilities and improve our paper based on that.
**The experiments could be extended to include more datasets and models.**
Thanks for this comment. In this paper, we conducted experiments on three datasets (CIFAR-10, CIFAR-100, and ImageNet) and compared with 9 baselines: MC-Dropout, Deep-ens, SGLD, SWAG, SWAG-Diag, SGVB, SGVB-LRT, SAM, and bSAM. We believe this amount of experiments satisfies the standard of our field.
Encouraged by your suggestion, we conduct more experiments on ImageNet for more baselines as reported in Table 1 in the attached pdf. Additionally, we report the largest eigenvalue of the Hessian matrix over several methods applying to WideResNet28x10 on CIFAR100, and the ratio of the largest and fifth eigenvalue (Table 4 in the attached pdf) as evidence that our method updates models to minima having lower curvature.
**The paper does not discuss the computational complexity of the SA-Posterior approach**
We further report the computational cost of several experiments in Tables 1 and 4 in the attached pdf. Similar to SAM, our flat-seeking method involves the computation of gradients twice. The initial computation is to derive the perturbed model $\theta'$ and the subsequent one is to update the model. As a result, the training time is nearly double in comparison to non-flat counterparts. We will explicitly discuss this in the limitation section.
**The paper does not compare the SA-Posterior approach with other recent approaches to improve the generalisation ability of deep learning models**
We provide the comparison of our proposed methods with _bSAM_ and _SAM_ (cf. Table 6 in supplementary), which are recent works using sharpness-aware minimization for improving the generalization ability of deep nets. The results demonstrate that our flat version of BNN outperforms bSAM on most of the metric scores.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their detailed response.
My concerns have been addressed and I will keep my score.
---
Reply to Comment 1.1.1:
Comment: We respect your decision. Thank you for taking the time to review our paper and hope we have a chance to improve it in the revised version based on your suggestion. | Rebuttal 1:
Rebuttal: We appreciate the reviewers' constructive comments. We would like to report additional experiments on both ImageNet and CIFAR datasets, the computation cost, the sharpness scores, and the eigenvalues of the Hessian matrix in the attached pdf.
Pdf: /pdf/40ab1969ffae241a5f360a8366c22d635e281059.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper introduces theories in Bayesian settings and proposes variational inference for the sharpness-aware posterior in the context of Bayesian Neural Networks. The proposed approach is incorporated into existing state-of-the-art Bayesian Neural Networks, and experiments were conducted to show the effectiveness of the sharpness-aware posterior. The results indicate that incorporating the sharpness-aware posterior methodology outperforms baselines in terms of ensemble accuracy, expected calibration error, and negative log-likelihood. Also, experiments show that models from the proposed approach are less sensitive to noise and have improved generalization ability
Strengths: 1) The paper proposes to approximate posterior of Bayesian Neural Networks with sharpness aware posterior.
2) The proposed approach can easily be incorporated to already existing Bayesian Neural Networks
3) The experiments use state-of-the-art settings
4) The paper outperforms the baseline methods
Weaknesses: 1) The experiments show that the proposed approach outperforms the baseline methods most of the time; however, the improvement is not that significant.
2) The experiments are only for the image-classification task
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Q1) What is the computational overhead of the proposed approach?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: It would be great to add the computational overhead of the proposed approach compared to the baseline approaches.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We sincerely appreciate your constructive comments. We carefully address your question to the best of our capabilities and improve our paper based on that.
**Computational overhead**
We further report the computational cost of several experiments in Tables 1 and 4 in the attached pdf. Similar to SAM, our flat-seeking method involves the computation of gradients twice. The initial computation is to derive the perturbed model $\theta'$ and the subsequent one is to update the model. As a result, the training time is nearly double in comparison to non-flat counterparts. We will explicitly discuss this in the limitation section.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the many clarifications.
---
Reply to Comment 1.1.1:
Comment: Thank you again for taking the time to review our paper and hope our responses effectively address your concerns. | null | null | null | null | null | null |
Practical and Asymptotically Exact Conditional Sampling in Diffusion Models | Accept (poster) | Summary: This paper focuses on solving inverse problems using diffusion based probabilistic models. More precisely, it is only assumed that one has access to a diffusion model for the prior distribution and a likelihood, so that no additional training is needed. The aim of the present paper is to provide an asymptotically exact method.
The method builds on the idea of reconstruction guidance, which aims at approximating the score involved in the diffused posterior, $\nabla \log p_t(x_t | y)$, which is equal to $\nabla \log p(y|x_t) + \nabla \log p_t(x_t)$, where the second term is given by the prior diffusion model. The first term is, however, intractable, but can be approximated by noting that $p(y | x_t) = \int p(y|x_0) p(x_0 | x_t) dx_0$. This integral is then approximated by simply plugging in the posterior mean, i.e. $p(y | x_t) \approx p(y | E(x_0 | x_t))$, which itself can be approximated using Tweedie's formula and the prior score.
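The chain of approximations just described can be written out as follows (a sketch in generic variance-preserving diffusion notation; the schedule coefficients $\alpha_t$, $\sigma_t$ are our notation rather than the paper's):

```latex
% Reconstruction-guidance approximation of the intractable likelihood term:
p(y \mid x_t) = \int p(y \mid x_0)\, p(x_0 \mid x_t)\, \mathrm{d}x_0
  \;\approx\; p\big(y \mid \mathbb{E}[x_0 \mid x_t]\big),
\qquad
\mathbb{E}[x_0 \mid x_t]
  \;\approx\; \frac{x_t + \sigma_t^{2}\, \nabla_{x_t} \log p_t(x_t)}{\alpha_t},
```

where the second expression is Tweedie's formula with the learned prior score substituted for $\nabla_{x_t} \log p_t(x_t)$.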
The authors use this idea to obtain particle approximations of the marginal of the backward diffusion of the posterior. They do so using twisting functions. This allows them to define a principled SMC samplers for the target distribution. By standard SMC results, the obtained particle approximation of the posterior converges to the posterior of the diffusion model. The extension to Riemannian manifolds is also provided.
Strengths: - The contribution of this paper is original. Unlike other methods trying to solve inverse problems, this method is theoretically grounded and is guaranteed to be reliable in relatively complex problems. This is for example not the case of DPS [1], which fails to sample the posterior even in the simplest Bayesian settings.
- I also acknowledge that the paper is well written, I enjoyed reading it.
- The numerical experiments are sound and the application to protein design is interesting.
Weaknesses: - While I find the approximations of the optimal twisting functions reasonable, I am not sure this is the best idea once we take the computational cost into account. Indeed, sampling from the transition requires computing the gradient of the score network with respect to the input. In large-scale applications (like images with 3x256x256 dimensions) this severely limits the number of particles that can be used, since the gradient must be computed for each particle. At the end of the day, the number of particles used is very important even if the transition kernels and weights are near optimal; if the posterior is highly multimodal then we inevitably need many particles in order to populate each mode. SMCDiff, on the other hand, should work better on these problems since one can use a larger number of particles given a fixed computational budget.
- The comparison with SMCDiff is in my opinion unfair. SMCDiff performs particle filtering only on the unobserved part of the state and does not require computing the gradient of the score network. Therefore the computational time and memory cost of TDS are much larger than those of SMCDiff. This is not taken into account in the numerical experiments. Furthermore, it seems that SMCDiff outperforms TDS in terms of ESS. I understand that SMCDiff relies on a stringent assumption that does not hold in practice, but in my opinion it is as reasonable as saying that the instrumental kernel and weights proposed for TDS are a good approximation. In short, I am not really convinced that TDS is better than SMCDiff for inpainting tasks. For noisy inverse problems, of course, the comparison is not relevant since SMCDiff is not designed for such problems.
- As far as I can tell, the impact of the dimension of the observation is not discussed. As is widely known in the SMC community, this makes the weights degenerate. I believe that a discussion on this matter should be added so that readers are not misled into thinking that in high dimensional problems this method will provide $K$ diversified samples from the posterior. It should be emphasized that this method works best on low dimensional problems.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: It is claimed in the appendix that the assumptions required to prove the convergence are mild. However, assuming that the ratio of the twisting functions (assumption (b), lines 588-589) is bounded is not a mild assumption since it does not hold in the simplest cases. It is claimed that this holds if the prediction $\hat{x}_0(x_t)$ is compactly supported, but I am not sure how this is possible even if the data distribution is compactly supported. Indeed, the marginals of the forward process $q_t$ are the result of the convolution of the data distribution with a Gaussian kernel, so they cannot be compactly supported. Since $\hat{x}_0(x_t)$ contains the score, it cannot be compactly supported either. Next, assuming that the score is bounded is also not reasonable. Finally, assuming that $p(y; x^0)$ is bounded away from zero works, but this does not hold for Gaussian linear inverse problems, which is quite unfortunate.
In short, both assumptions (b) and (c) are strong assumptions, and this should be stated explicitly.
There is a typo in assumption (c); it's not the gradient that should be continuous with bounded gradients.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful comments and are glad they found the paper to be original, well-written, and to have sound theoretical and numerical support. We believe that our new experiments and clarifications below thoroughly address the weaknesses noted.
**Computational cost.** Compared to other approaches, we believe the proposed algorithm can present a favorable trade-off between computational cost and accuracy. But we agree with the reviewer that the proposed algorithm is not the best idea when both the following hold: (1) repeated, fast generation at inference time is of primary concern and (2) designing and training an effective conditional model is an option. We summarize some comparisons both from our new experiments and submission:
* For the class-conditional ImageNet experiment described in our high-level response, we estimate that the compute spent on training the noisy classifier required by classifier guidance was approximately 330 GPU hours ([they](https://github.com/openai/guided-diffusion/tree/main) report 300,000 training iterations with a batch size of 256, and in our hands one training iteration with a batch size of 8 takes 0.1244 GPU seconds). By contrast, generating a sample using TDS with 16 particles requires 6 GPU minutes, or 0.03% of the conditional training time, while providing comparable accuracy. Moreover, TDS does not require the engineering time to assemble a labeled dataset and design the time-dependent model. However, because inference time with classifier guidance is lower, the total compute cost would be lower if many (e.g., >3,300) samples were to be generated. The reviewer is correct that parallelizing across many particles is not as straightforward for ImageNet due to the high dimensionality, but it could in principle be scaled across multiple GPUs or computed in sequence (we used the latter approach for our experiments).
* Relative to SMCDiff, in our experiments the efficiency improvement more than compensates for the cost of each particle. The compute cost of TDS is roughly 2.2x higher than that of SMCDiff, but Figure H shows that TDS with 2 particles outperforms SMCDiff with even 64 particles. In this setting, TDS is >10x faster and provides better results. The observation about the effective sample size of SMCDiff relative to TDS is interesting. We are surprised by this result given the worse empirical performance of SMCDiff by classification accuracy and do not have a good explanation for why these metrics disagree.
**Impact of dimension:** Our promising empirical results on MNIST (784 dimensional), protein design (\~600 dimensional, varying by test-case), CIFAR 10 (\~3K dimensional) and ImageNet (196K dimensional) suggest TDS can work well even on high dimensional problems.
Notably, in the ImageNet case, although the particles lack diversity in global-level features, there remain variations in local patches (see Figure 2).
Indeed, these high-dimensional empirical successes are not typical for SMC methods, which can have notoriously bad dimension dependence. We suspect this good performance in high dimensions owes to the quality of the proposal distributions obtained with our twisting functions.
**Suitability of the assumptions of Theorem 1:**
We thank the reviewer for their thoughtful engagement with the assumptions of our theorem. We will replace the word “mild” with a discussion of the conditions, as we agree that our claim of “mildness” in the main text was overstated. However, we disagree with the claim that our conditions “don’t hold in the simplest cases” and hope the points below will satisfy the reviewer and (once added to our revision) future readers.
* **Compact support of $\hat x_0(x_t)$:** We expect the range of $\hat x_0(x_t)$ to lie within some compact set for two reasons. First, it is common practice to truncate denoising predictions to the range of the data, for example in image diffusion models by clipping values between the minimum and maximum pixel intensity values. Second, $\hat x_0(x_t)$ is trained to approximate $E[x_0 | x_t]$, which lies within the convex hull of the population distribution; for example, the (mean-centered) protein structures have bounded support because they contain a finite number of atoms whose distances are constrained by fixed bond lengths, and so we always observe denoising predictions within the bounds of some maximum size. We do not assume that the $x_t$'s for $t>0$ have compact support (which indeed does not hold in general).
* **Bounded score:** Our assumption is not that the (unconditional) score is bounded (which indeed would not be satisfied), but that the gradient of the log likelihood approximation $\nabla_{x_t} \log \tilde p(y|x_t)$ is bounded. As we noted on line 601, this is our “strongest” assumption but will be satisfied if $\hat x_0(x_t)$ and $p(y;x_0)$ are smooth in $x_t$ and $x_0$ respectively. Though this condition is difficult to check, we suspect this assumption will be satisfied for typical denoising networks and classification models when trained with regularization to encourage smoothness.
* **Applicability to linear inverse problems:** The reviewer is correct that assumption (c) does not hold in these cases, and we will state this more explicitly. For this reason, we had presented our theorem for the likelihood case in a separate first section (3.2) and described the inverse-problems case subsequently. Generalizing our theory to cover this case as well is of interest for future work, but requires moving away from standard SMC theory: although SMC methods are commonly used with weight functions that are not bounded (as in the inverse-problems case), most existing theory requires this assumption.
We thank the reviewer for identifying our typo in our statement of assumption (c). We will correct it. Thank you very much again for your careful reading!
---
Rebuttal Comment 1.1:
Comment: I would like to thank the reviewers for their thoughtful and honest answer! It is quite surprising, and quite unintuitive for me, to see that $K = 2$ particles are enough already. My concerns are addressed and I have decided to raise my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply. We are glad to have addressed your concerns. | Summary: The paper proposes an SMC algorithm to draw conditional samples from a diffusion model. Specifically, they wish to draw samples from p(x0|y) given a diffusion model p(x0) and likelihood p(y|x0). Existing techniques to do so rely on expensive training of conditional diffusion models or heuristics which do not sample from the correct conditional distribution.
Strengths: - Theoretically well-founded method. The paper does a good job of describing the flaws of commonly used methods for conditional generation and proposes a technique to target the true conditional distribution without needing task-specific training.
- State-of-the-art results for protein motif scaffolding, which is a very reasonable application of the proposed exact targeting of conditional distributions
Weaknesses: - For some of the experiments (such as inpainting), a reasonable baseline to compare against would be that proposed in Section 3.1 of VDM (https://arxiv.org/abs/2204.03458). This performs conditional generation with an unconditional diffusion model using a heuristic that they show improves on naive guidance (although doesn't have theoretical guarantees like the proposed SMC method). A comparison against this would make it clearer when Twisted SMC is indeed the best option.
- Section 4.3 is hard to read and lacking in plots or concrete results. Adding something like Figure J from the appendix would make the results more convincing for readers who do not venture to the appendix to search for plots.
- How accurate is the inferred conditional distribution when fewer sampling steps are used? With progressive distillation or more modern ODE/SDE solvers it is now common to sample from diffusion models with tens of integration steps. The performance of SMC presumably degrades when fewer steps are used, since each individual step must be bigger. Can the authors comment on this or quantify this effect? E.g., how would twisted SMC perform relative to the baselines if the number of diffusion steps used for motif-scaffolding were halved?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and are glad they found the paper to be theoretically well-founded and that they appreciate the state-of-the-art results. We believe we thoroughly address the noted limitations below.
**VDM as a baseline:** As we noted in our high-level reply, the method we labeled as “guidance” exactly coincides with “reconstruction guidance” as proposed in VDM. We thank the reviewer for revealing that this term was unclear, and will be certain it is clarified in our revision.
**Section 4.3 results and readability:** We appreciate the suggestion to provide more concrete results and will move Figure J into the main text to accomplish this as suggested. Additionally, we will add a brief preamble paragraph to section 4.3 to summarize the evaluation and state-of-the-art results.
**Accuracy with fewer steps:** This is a great question. We comment first on empirics and then on methodological considerations. With 50 or 100 steps rather than 200, performance seems to degrade to varying extents depending on the particular motif (see the table below). The performance drop is statistically significant (by Fisher exact test, p<0.05) in only one of three cases (3IXT).
However, the interpretation of this result is complicated by the fact that changing the number of steps (or, analogously, using a different ODE/SDE integrator) slightly modifies the sampling distribution of the unconditional model, and therefore also slightly modifies the conditional distributions. Consequently, exactly quantifying the impact on accuracy is ill-posed because it is relative to a moving target. We expect, however, that though one can apply TDS with any number of sampling steps, effective sample sizes may be worse when step sizes are large. In fact, we conjecture that the KL divergence between intermediate targets is linear in the step size, and therefore the number of particles needed could grow exponentially. However, we have yet to prove this result and leave it to future work. We additionally note that the performance of other conditional generation methods may also degrade in some cases when fewer steps are used.
*Table:* Number of motif-scaffolding successes (out of 50 runs) with different numbers of diffusion steps with 8 particles.
| Motif | 200 steps | 100 steps | 50 steps |
| ----------- | --------- | --------- | -------- |
| 1QJG | 3 / 50 | 1 / 50 | 1 / 50 |
| 3IXT | 48 / 50 | 43 / 50 | 42 / 50 |
| 5TRV_short | 20 / 50 | 9 / 50 | 8 / 50 |
The results above are from a preliminary exploration (conducted before submission) of the impact of the number of steps. In response to the reviewer's comments, we intend to rerun these experiments to understand this impact more comprehensively. | Summary: This paper proposes a practical approach for achieving asymptotically exact conditional sampling from diffusion models via Sequential Monte Carlo (SMC). It draws a connection between SMC and diffusion models, and one of the key features is the approximation of the optimal twisting functions by tractable alternatives. The proposed method is extended to tackle inpainting, inpainting with degrees of freedom, and diffusion models on Riemannian manifolds. The experiments involve a synthetic diffusion model, class-conditional sample generation on the MNIST dataset, and the motif-scaffolding problem, which is relevant to protein design.
Strengths: 1. The inspiration of the paper is good, which tries to achieve exact conditional sampling from diffusion models by making connection to SMC, and extend to Riemannian diffusion models.
2. (More like an) ablation study in the experiment section examines whether the proposed method works better.
Weaknesses: Based on my understanding,
1. The novelty of this paper is not clear to me. The strong and close connection between diffusion models and the SMC technique is not crystal clear to me. Even though the reverse process with T diffusion steps in diffusion models and sampling across T steps in SMC could be related somehow, the significance of this connection is not apparent. The authors also point out that classifier guidance [1] requires additional classifier training, but no computational-cost or sample-quality comparison is provided; classifier-free guidance [2] requires class information as an additional input, but again, no such result is presented, and class information is still needed by the proposed TDS method (from my understanding, but I could be wrong). Equation 10, which the paper refers to as a key insight, is not explained well and has actually been proposed in [5] with an even clearer explanation; that earlier paper is not cited either. The title is also not informative enough, and the emphasis on SMC is not evident.
2. The first two experiments lack real benchmarks, particularly the examination on the MNIST dataset. I am also concerned that the "Guidance" method performs significantly worse than the other four methods without any justification, and no pre-trained diffusion model serves as a benchmark here. All three experiments compare the method only against its own variants. The image-synthesis experiments would be more convincing if they were conducted on more diverse and challenging datasets such as CIFAR-10, instead of relying solely on the MNIST dataset, which consists of grayscale images with only 10 classes.
3. I am not sure why the ``Related work`` section is placed in the appendix. The MNIST class-conditional generation experiment is in the main paper, while the inpainting experiments are in the appendix as well (despite being mentioned in the ``abstract`` and ``conclusion``). I do not think this is common practice, and I would suggest moving them from the appendix into the main paper.
4. There is much room to improve the writing. There are many typos in both grammar and math expressions; some results are not cited properly, e.g., Tweedie's formula in the ``background``; some descriptions are unclear, e.g., the diffusion models described in the ``background`` and ``methodology`` are not identified as VE or VP diffusion models until the appendix; and mathematical notation appears without proper explanation, e.g., $\nu_t(x^{t:T})$ at its first occurrence on line 94, where its meaning is not clarified. The lack of coherence between sentences and paragraphs further hampers the clarity of the paper.
[1]: Prafulla Dhariwal and Alexander Nichol. Diffusion models beat GANs on image synthesis. Advances in Neural Information Processing Systems, 2021.
[2]: Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022.
[3]: Hyungjin Chung, Jeongsol Kim, Michael T Mccann, Marc L Klasky, and Jong Chul Ye. Diffusion posterior sampling for general noisy inverse problems. In International Conference on Learning Representations, 2023.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. ``Choosing weighting function $w_T(x^T) = w_t(x^t, x^{t+1}) = 1$ for $t = 1, \dots, T$'' appears repeatedly in the paper, leaving only the proposal functions in the equation. Are you assuming no resampling here, so that no particle is discarded? If so, please make this clearer.
2. For weighting function on line 95, $w_k^t$ is a function related to $x_k^t, x_k^{t+1}$ and no $y$ is involved, so what is the point of not including $y$ here? When twisted weighting function is defined on equation 15, it is related to $y$ by $\tilde{p}_\theta(y | x^t)$ though.
3. On line 141, ``recall that $\hat{x_0}(x^t) = \mathbb{E}_{p_\theta}[x^0 | x^t]$ if $p_\theta$ is optimized to exactly match the true distribution $q$''. What if this assumption does not hold perfectly? Is there any statistical analysis to justify this?
4. On equation 10, ``the approximated optimal twisting function is $\tilde{p}_\theta(y | x^t) := p(y; \hat{x}_0(x^t, t))$''. Is there any analysis (if this idea is from [5], please ignore this question about analysis) or experimental result comparing against not denoising $x^t$ back to $x^0$?
5. Is the first experiment two-dimensional? What if it is high-dimensional?
6. What's the computational cost for various numbers of particles? Is there any variance reported for different numbers of particles for each experiment?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: I would expect the authors address the weakness and questions aforementioned.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed review and are glad they appreciate the inspiration for the method. We hope the new clarifications and benchmarks described in the below provide improved support for the progress we have made in this direction.
__Weaknesses:__
1. The first comment on weakness has several components that we address independently:
- **Significance between the connection between SMC and diffusion models.** The significance is that it allows us to use heuristic approximations to conditional sampling in diffusion models to implement a practical and asymptotically accurate conditional sampling procedure (See e.g. lines 51-57).
- **Comparison to conditional approaches.** We note that class information is required for TDS since the goal is to generate samples given a class. The advantage of TDS is that it does not require additional training of a classifier on noisy inputs diffused at various steps (as in classifier-guidance), or a conditional diffusion model trained on class information (as in classifier-free guidance). TDS can operate on a pre-trained unconditional diffusion model and a standard classifier trained on clean inputs. See also our response to weakness (2).
- **Novelty of equation 10.** With regard to Eq. 10, the insight is that such approximations (previously suggested in [5] and also [15, 21]) can enter an SMC procedure as a “twisting function”. While we have discussed this connection on line 151 as well as in related work (line 519), we will make it clearer in the main text.
- **Title.** We appreciate the suggestion to change the title, as we recognize that the substance of our contributions build more heavily on existing SMC methods than one might have thought. **Our new proposed title is: "Practical and Asymptotically Exact Conditional Sampling in Diffusion Models"**
2. At the reviewer’s suggestion we now include applications to CIFAR10 and ImageNet, and compare to a conditionally-trained classifier guidance baseline, and believe the results (described in the high-level response) strengthen the paper.
3. We appreciate the suggestion and will move the discussion of related work into the main text. We will either additionally move the image inpainting results into the main text (space allowing) or strike the references to these experiments from the abstract and conclusion.
4. We appreciate the comments on writing and will add appropriate references and discussions of the connections to VE and VP models to the main text.
__Questions:__
1. Yes. When weights are uniformly one, one can skip resampling steps to reduce the variance of the procedure (see the footnote on page 4).
2. We thank the reviewer for pointing out this omission. We have left “y” as an implicit input, but will make its omission explicit in the revision.
3. Yes, this is a great question. If it does not hold perfectly, the distribution of the samples returned by TDS has some deviation from the exact conditional distribution. A primary contribution is to show that this deviation may be reduced arbitrarily by increasing the number of particles (Theorem 1).
4. This approximation does indeed come from reference [5] (inheriting from [15] before it). See also our response to weakness 1. Not denoising back to x_0 is an interesting idea, but we suspect it would not provide a sensible result because noisy states would be very dramatically out of distribution for most likelihoods as compared to the denoised predictions.
5. The first experiment is indeed two-dimensional. This experiment was chosen to be as small as possible while remaining illustrative; we intended the 784-dimensional MNIST experiments to illustrate a higher-dimensional toy problem, and hope that our ImageNet experiments further address concerns about higher-dimensional problems.
6. The compute cost increases linearly with the number of particles, and the inverse variance also increases linearly with the number of particles. This rate is a consequence of standard SMC results and can be seen from the slope of -1 (on a log-log scale) in our simulation experiments.
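As an illustration of this standard rate (a self-contained sketch, not our experimental code), a plain Monte Carlo estimate already shows the variance shrinking linearly in the number of particles:

```python
import numpy as np

# Illustration of the standard 1/K Monte Carlo rate: the variance of a
# K-particle estimate of E[x] for x ~ N(0, 1) shrinks linearly as K grows.
rng = np.random.default_rng(1)

def estimate_variance(K: int, reps: int = 2000) -> float:
    # Variance of the K-particle mean estimate, measured over
    # independent repetitions.
    estimates = rng.standard_normal((reps, K)).mean(axis=1)
    return estimates.var()

v_small, v_large = estimate_variance(64), estimate_variance(256)
# Quadrupling K cuts the variance roughly 4x (slope -1 on a log-log plot).
assert 2.5 < v_small / v_large < 6.0
```

The same linear relation is what appears as a slope of -1 in log-log plots of estimator variance against the number of particles.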
---
Rebuttal Comment 1.1:
Title: Response to Authors
Comment: I would like to thank the authors for their answers and the additional information. Most of my questions and concerns were addressed. For the additional experiments on ImageNet, I can see a big improvement in classification accuracy, although classifier guidance can already achieve 99% accuracy; however, it is noteworthy that the quality and diversity of the generated samples, as indicated by the FID and Inception scores, have notably decreased. This raises two questions: 1) Is it justifiable to sacrifice sample quality and diversity to improve classification accuracy by applying TDS? This is without considering the increased time consumption resulting from the higher number of particles. 2) I may still not fully understand it, but can it be considered valid to employ TDS on top of a pre-trained classifier, particularly when training a classifier is time-intensive?
I would be happy to update my score once my concern is addressed.
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply. We reply to the questions and then expand on our FID results, which we feel may appear artificially bad:
(1) For class-conditional generation, if one has access to an already-trained noise-level-dependent classifier, we suspect it is not worth running TDS. TDS is not meant to address this setting.
(2) Yes. In many settings one may have access to a classifier trained (e.g. by someone at a different organization) on _only noise-free data_, and wish to guide their generations. In such settings, running TDS would be possible without requiring either (a) further training or even (b) access to a labeled dataset. By contrast, both (a) and (b) are requirements of classifier guidance.
We next clarify our FID results. You are correct that we reported worse FID scores with TDS relative to classifier guidance. However, we suspect this degradation owes primarily to a decrease in diversity rather than a decrease in quality. In particular, our evaluation was roughly matched on inference-time computation cost; for classifier guidance and TDS (P=1) we generated 16 _independent_ samples per class, whereas for TDS (P=16) we generated 16 _dependent_ samples for each class due to resampling (in total, 16,000 images were generated for each method). Using TDS with P=16 increases quality relative to P=1 (as suggested by the visual results, classification accuracy, and Inception score), but the dependence between particles lessens diversity and thereby increases FID. To demonstrate this, we have run TDS (P=16) a second time and merged the samples. In this case, we obtain an FID score of 22.68 and suspect we would see further improvements with additional independent runs of TDS (but have not done so due to the computational resources available on short notice). To match the previous evaluation size, we also computed the average FID score of 23.75 (with std of 0.17), evaluated on 16k samples randomly selected from the 32k combined samples, over 10 random selections. We note that these FID scores are lower than that of samples from the unconditional model (26.2). In practice, one can increase the diversity of TDS samples by (1) running TDS for multiple independent runs, or (2) reducing the resampling frequency so particles are less dependent on each other (in our experiment we resample at every step). | Summary: This paper addresses the challenge of conditioning in unconditionally-trained diffusion models. The most successful approaches often require explicit training on conditional data. This paper frames sampling from such conditionals as an SMC procedure and proposes the Twisted Diffusion Sampler (TDS), a method derived from the twisting technique in the SMC literature.
TDS uses a classifier trained on the clean, non-noisy data to construct a sub-optimal proposal distribution for sampling from the diffusion process. This proposal is then used in the twisted SMC framework to get weighted particles that more closely approximate the conditional distribution of interest.
The authors prove that their proposed SMC method targets the correct distribution and is asymptotically exact. Furthermore, they show how to extend TDS to work for various inpainting problems and Riemannian diffusion models.
Finally, the authors empirically verify TDS's correctness on a simple synthetic diffusion model with tractable score functions. They then show its effectiveness on the more realistic tasks of class-conditional MNIST image generation and inpainting. Lastly, they show TDS achieves state-of-the-art results on the motif-scaffolding problem in protein design.
Strengths: The central contribution of the paper is introducing the twisted SMC framework for sampling from diffusion models. While the twisted SMC is not novel, applying it to diffusion models and demonstrating use cases is significant. Moreover, the extension of TDS to some settings other than class-conditional framework was very interesting to me.
In terms of clarity, this paper is mostly clearly written. There are some typos and clarifications required that I have listed in the questions section.
Weaknesses: I think the main weakness of the paper lies in the experimental section. First, the experiments are rather small scale. It would be nice to see how the method performs on larger-scale standard tasks such as class-conditional CIFAR or ImageNet generation. Second, I expected to see more baselines in the experiments. The paper is motivated by pointing out that many existing methods require additional training; I believe such methods should be included as baselines. For example, classifier guidance with a proper classifier trained on noisy data, or classifier-free guidance, would be good baselines to compare against. Admittedly, they might outperform TDS, particularly with smaller K. However, it would be a good upper bound to have. Moreover, it gives insight into the tradeoff between using a large K and training a separate model.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. I am not sure that what the authors refer to as "reconstruction guidance" is correct. Quoting from the related work section: "the use of denoising estimate $\hat{x}_0(x^t, t)$ to form an approximation $\tilde{p}_\theta(x^t | x^{t+1}, y)$ to $p_\theta(x^t | x^{t+1}, y)$ is called _reconstruction guidance_." As far as I know, reconstruction guidance was proposed in [1] and improves on the replacement method [2], which is for inpainting applications. Moreover, the other two citations are [3, 4] (citation numbers 5 and 25 in the paper), which are unrelated to reconstruction guidance (this might be related to question 3 below, though).
2. In the related work section (App. B), on line 523, there is a "[33]" citation which does not exist. Reference 29 in the paper ([5] here) seems to be what was meant. That paper, however, does not do reconstruction guidance as far as I can tell.
3. Looking at the rest of the related work section, it seems that most citation numbers are incorrect. For example, citations 26 and 18 on lines 524 and 528, respectively, do not seem to refer to the correct papers. I suggest double-checking the citations.
4. In Section 3.3, it is not clear to me how Eq. (18) is derived, and whether it is a convenient likelihood that often works in practice or a general result. I did not see an explanation in the appendices either. I think it would be helpful to clarify this.
5. In the experiments section
1. "Guidance" is defined as "TDS" with one sample. If I understand correctly, it would be equivalent to "TDS-IS" with one sample. Is this correct? If so, I would expect it to be outperformed by "TDS-IS" with more samples, while the middle and right panels of Figure 1 show otherwise. Additionally, in Figure 2(a), I do not understand why "TDS-IS" would show less diversity than "guidance". Why would more particles destroy the diversity?
2. I am wondering if the authors have insights on why in Figure 2(b) ESS jumps back up very close to t=0.
6. Minor issues
- Section 3 and subsection 3.2 have the same name.
- The background section on diffusion models explains diffusion processes without scaling (e.g. the variance-preserving process). It is worth mentioning in the background section that not all diffusion models follow this exact framework.
- I think it's nicer if Eq. 11 is written to define $\tilde{p}_\theta(x^{t-1} | x^{t}, y)$ instead of $\tilde{p}_\theta(x^t | x^{t+1}, y)$ to be similar to Eq. 3.
7. Typos
- Line 9: the -> that
- Line 99: provide -> provides
- Line 148: eq. -> Eq.
- Line 153: I might be wrong, but I guess $s_\theta(x^t, y) = \nabla_{x^t} \log q(x^t, y)$ should actually be $s_\theta(x^t) = \nabla_{x^t} \log q(x^t)$. Otherwise, the assumption of $\tilde{p}_\theta(y | x^t) = p_\theta(y | x^t)$ becomes unnecessary for Eqs. (13,14).
- Line 157: $w_T(x^T) := \tilde{p}_\theta(x^T)$ -> $w_T(x^T) := \tilde{p}_\theta(y | x^T)$
- Line 218: sytematic -> systematic
- Line 570: systematica -> systematic
- In the second line of equations under line 578: $p(y | x^{t'})$ -> $p(y | x^{t})$
[1] Jonathan Ho, Tim Salimans, Alexey A Gritsenko, William Chan, Mohammad Norouzi, David J Fleet.
"Video diffusion models".
NeurIPS 2022.
[2] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole.
"Score-based generative modeling through stochastic differential equations".
ICLR 2020.
[3] Hyungjin Chung, Jeongsol Kim, Michael T Mccann, Marc L Klasky, Jong Chul Ye.
"Diffusion posterior sampling for general noisy inverse problems".
ICML 2023.
[4] Jue Wang, Sidney Lisanza, David Juergens, Doug Tischer, Joseph L Watson, Karla M Castro, Robert Ragotte, Amijai Saragovi, Lukas F Milles, Minkyung Baek, et al.
"Scaffolding protein functional sites using deep learning".
Science 2022.
[5] Guanhua Zhang, Jiabao Ji, Yang Zhang, Mo Yu, Tommi Jaakkola, and Shiyu Chang. "Towards coherent image inpainting using denoising diffusion implicit models".
arXiv preprint, 2023.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The main limitation of the proposed method is the additional sampling cost due to multiple SMC particles which is mentioned in the discussion section of the paper. Otherwise, I do not see other limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their very careful and detailed review. We are glad they found it to be significant and interesting. We found the questions and weaknesses noted helpful and believe we have addressed them below.
**Improving the experimental validation:** As noted in the response to all reviewers, we have implemented the suggestion to include class-conditional CIFAR and ImageNet generation, and find TDS to be competitive with classifier guidance (albeit at larger inference-time cost). As the reviewer suggested, classifier guidance does indeed outperform TDS with just a single particle, but performs similarly in classification accuracy (see Table 1 in the high-level reply).
**Replies to questions:**
1. Reconstruction guidance uses the reconstruction estimate $\hat x_0(x_t)$ as a proxy for the true conditional mean $E[x_0|x_t]$. Eq. 7 in the VDM paper [15] demonstrates the use of the reconstruction guidance gradient obtained by backpropagating through the denoising network. In the inpainting case, TDS with P=1 coincides with reconstruction guidance. We will make this connection clearer in the revised paper.
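In symbols, the approximation described here can be sketched as follows (our rendering of standard reconstruction guidance, not a formula quoted from the paper):

$$
\tilde{p}_\theta(y \mid x^t) := p\big(y \mid \hat{x}_0(x^t)\big), \qquad \hat{x}_0(x^t) \approx \mathbb{E}[x_0 \mid x^t],
$$

so the guidance term $\nabla_{x^t} \log p\big(y \mid \hat{x}_0(x^t)\big)$ is obtained by backpropagating through the denoising network that produces $\hat{x}_0$.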
2. The reviewer is correct that these references were wrong. We apologize for our mistake, which resulted from an accidental reordering of references between when we compiled and submitted our main text and our appendix files.
3. See (2) above.
4. We thank the reviewer for pointing out that Eq. 18 was unclear. The final proposal and weights in Eq. 18 are chosen to ensure that the final target is the exact conditional distribution of interest. The intuition is that, as defined in Eq. 18, $\tilde{p}(x_0 \mid x_1, y) = p(x_0 \mid x_1, y)$, and as a result reweighting is not needed. By substituting Eq. 18 into the expression for $\nu_0$ in Eq. 16, we obtain the desired target. We will clarify this in our revision.
5. This question touches on both TDS-IS and ESS plots:
- *TDS-IS.* We thank the reviewer for pointing out this surprising behavior. We have investigated it and discovered that it was due to a bug in the implementation of our simulations; in short, we used an incorrect and unstable implementation of the final-step weights. Our attached PDF presents the results for inpainting (Figure 4), where we find that the results agree with the reviewer's expectations. Our revision will include corrected versions of the remaining three simulations with this bug fix.
As for why TDS-IS provides lower diversity compared to guidance in Figure 2a, this is in essence a bias-variance trade-off. TDS-IS explicitly weights and resamples based on the likelihood approximation at the last step. This resampling replicates some samples and eliminates others, which reduces diversity by leaving some samples identical to one another. By contrast, guidance generates independent samples and does not use importance weighting and resampling, hence resulting in higher diversity but worse average quality.
- *ESS jumps near t=0.* This is a good question which we have wondered about as well. We report additional ESS traces in Figure 5 showing that this jump happens irregularly; we see it regularly for MNIST, sometimes for CIFAR10, but usually not on ImageNet (Figures 5A, 5B, and 5C). Mechanically, the drop in ESS implies a large discrepancy between the final and intermediate target distributions. We suspect such discrepancies might arise from irregularities in the denoising network near t=0, and will add a brief discussion of this behavior and its implications for particle diversity and the value of truncating resampling in the final steps.
6. This question touches on section titles, diffusion model background and time-indices.
- *section 3 and 3.2 titles.* We thank the reviewer for pointing this out. We will provide more specific headings: “3: Twisted Diffusion Sampler: SMC sampling for diffusion model conditionals" and “3.2: Twisting functions and convergence for smooth likelihoods”.
- *diffusion model background.* We thank the reviewer for pointing out this confusion. We omit the scaling factor to simplify demonstration, and include the discussion of variance preserving and variance exploding models in appendix D. We will add a reference to the appendix in the revised background section.
- *time indices.* We appreciate the suggestion. However we prefer to keep this choice of indexing to better agree with the indexing we use for sequential Monte Carlo. Unfortunately, this cannot instead be resolved by changing the indexing of diffusion models in equation (3) without requiring $x_{t+1}$ as an argument of the denoiser and score model throughout.
7. We thank the reviewer for identifying these typos. These were all correctly identified and will be resolved in the revision.
---
Rebuttal Comment 1.1:
Title: Response to Authors
Comment: I appreciate the authors' effort to conduct further experiments and their clarifications. Most of my questions are answered and I will raise my score. However, the new results do not demonstrate a clear advantage from the techniques introduced. In particular, the scores of TDS with one particle (which is equivalent to "naive" guidance) are very close to those of TDS with 16 particles. The exception is the inception score, where TDS (P=16) shows a higher score suggesting more sample diversity, but the diversity of the reported images is very limited.
Nonetheless, I still believe the proposed method and the details of implementing it in different applications are elegant and fruitful for the community. | Rebuttal 1:
Rebuttal: We thank the reviewers for their detailed comments and suggestions. We are pleased that the reviewers found our method to be “practical”, “original”, “theoretically well-founded”, and to give “state-of-the-art” results in protein design. We believe we have addressed all suggested weaknesses, and have improved the submission both by adding several new experiments and through revisions to the text. Notably, we apply TDS to 256x256 ImageNet class-conditional generation and find TDS a viable alternative to Classifier Guidance, which requires expensive training of a classifier on noisy inputs.
Before describing these changes we provide additional high-level context and summarize our contributions.
Our submission proposes an algorithm, TDS, for accurately estimating conditional distributions implied by diffusion models, which is useful in cases where it is desirable to trade off sampling time against accuracy. For example, the development of protein-based drugs might involve (1) developing and training a diffusion model of protein structures, (2) sampling candidate protein structures from the model, and (3) experimentally and clinically validating candidate proteins. While steps (1) and (3) might require years, sampling (2) takes only seconds. The proposed algorithm would apply in this second step. On several full-scale protein tasks, we show that TDS significantly outperforms the previous state of the art, a conditionally-trained diffusion model [26]. Moreover, we show the algorithm allows one to target conditional distributions to which previous methods do not even apply. In addition, we evaluated TDS on synthetic problems and well-studied computer vision benchmarks.
We now move to shared concerns with discussions of new experiments with figures in the attached PDF.
__Higher-dimensional image problems and comparison to baselines.__
Reviewers 8aww, UG6i and BFYS noted that our submission had not explored performance on high-dimensional problems. To this end we have included additional experiments on class-conditional generation on CIFAR10 and 256x256 ImageNet.
- ImageNet: we compare TDS (with # of particles P=1,16) to Classifier Guidance (CG) using the same unconditional model from [here](https://github.com/openai/guided-diffusion/tree/main). CG uses a classifier trained on noisy inputs. For TDS, we use the same classifier evaluated at timestep = 0 to mimic a standard trained classifier. We generate 16 images for each of the 1000 class labels, using a guidance scale of 10 and 100 sampling steps. Notably, given a fixed class, TDS(P=16) generates correlated samples in a single SMC run, and TDS(P=1) and CG generate 16 independent samples.
TDS faithfully captures the class and has image quality comparable to CG's (Figure 2, class: bramblings), although with less diversity than CG and TDS(P=1). Figure 3 shows more samples for randomly selected classes. We also report results of the unconditional model from [1], evaluated on 50k samples with 250 sampling steps (Table 1). TDS and CG provide similar classification accuracy. TDS has a similar FID to the unconditional model and a better inception score. CG's FID and inception score are better than TDS's. We suspect this difference is attributable to the sample correlation (and hence lower diversity) among particles in a single run of TDS(P=16).
- CIFAR10: we ran TDS (P=16) with guidance scale = 1 and 100 sampling steps (Figure 1) using diffusion model from [here](https://github.com/openai/improved-diffusion) and classifier from [here](https://github.com/VSehwag/minimal-diffusion/tree/main). TDS generates faithful and diverse images. However, we found TDS can occasionally generate off-the-manifold samples for the class ‘truck’ (Figure 1F).
__Reconstruction guidance, video diffusion models (VDM) and baselines terminology.__ Reviewer bGjC suggested comparing TDS to the reconstruction guidance approach presented in “Video Diffusion Models” [15], and reviewer 8aww identified that our description of this baseline (first introduced in [15]) was unclear. We compared TDS to reconstruction guidance in our submission under the name “guidance”, a choice of terminology we will clarify in our revision.
In brief, we used “guidance” to describe a generalization of “reconstruction guidance” to classification problems and inpainting with degrees of freedom (App. B); in the inpainting case, it exactly coincides with reconstruction guidance [15]. Our approach may be seen as wrapping this heuristic approximation in an SMC sampler to improve accuracy by adding additional compute (lines 53-55). See our reply to reviewer bGjC for details.
__Compute Cost.__ We will include the below discussion in our revised manuscript.
- Classifier guidance vs TDS: In Classifier Guidance [8] one trains a noise-level-dependent classifier. One therefore incurs a large, up-front computational cost to train the classifier. It amortizes the inference problem and allows conditional samples to be generated in a single trajectory, and so is fast at inference time. By comparison, TDS demands compute that is (1) linear in the number of particles used and (2) higher by a constant factor due to backpropagating through the denoising network. In the ImageNet model, TDS takes 0.34s to generate one particle at a time step, and Classifier Guidance takes 0.15s (results averaged over 100 samples on a V100 GPU). So TDS requires about 120% more GPU time in this instance. Notably, training the noisy classifier required by Classifier Guidance can take around 330 GPU hours.
- TDS vs RFdiffusion on motif-scaffolding: TDS(P=8) is faster than the state-of-the-art baseline (RFdiffusion). On a 100-residue test case (1QJG) on an A4000 GPU, run-time was 80 seconds for TDS, compared to 150 seconds for RFdiffusion. In both cases the models used 200 diffusion time steps. The slower speed of RFdiffusion owes to its use of a larger neural network.
Pdf: /pdf/d9e85c020db33254b2e68b90bb99cd3e14385bce.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline | Accept (poster) | Summary: The authors propose a simple method to perceive the length of an LLM response by asking the LLM. Then, the authors propose to group queries with similar response lengths into micro-batches, which are allocated to different GPU nodes and processed in parallel. The authors show empirical gains in terms of throughput. This approach is also orthogonal to other inference acceleration approaches.
Strengths: 1. The proposed method is simple and exact and does not sacrifice response quality while achieving speedup.
2. The authors show impressive speedup in terms of throughput.
3. The speed up the author achieve can be applied on top of other inference acceleration techniques.
Weaknesses: 1. One fundamental weakness of the paper lies in the assumption that requests can be reordered, which may not hold in production. The authors did not show any fairness metric, which would help readers understand how their method affects each individual request in practice.
2. The authors did not show any fine-grained ablation studies examining how often requests are reordered and how this affects inference latency.
I would happily raise my rating if the authors can present more thorough ablation studies.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I have listed my concerns above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the insightful comments. We are pleased to see that the reviewer acknowledges our contribution. The questions are answered below.
**Weakness-1**: The foundation of our method is not based on the assumption that individual requests can be "reordered," but rather on the expectation that a "group" of requests is received simultaneously (as mentioned on line 192). In real-world production environments at big companies such as OpenAI and Google, the volume of requests is substantial and continues to increase over time. For instance, in April, ChatGPT handled approximately 1.7 billion requests, resulting in an average request rate of nearly 700 per second. This exceeds the scale of our experimental setup, where the group size was set at 256. Conversely, smaller companies with a limited number of requests can opt for stream inference instead of batch processing. Our method will become increasingly critical as AI technology continues to advance and become more widespread.
We agree that a fairness metric measuring individual request delay is very important. We introduce three metrics for comparison: max wait time, average wait time, and their ratio (max wait multiplier). We define the wait time for a user to be the delay from receiving the request (the start of processing a group) to generating the corresponding response. With group size 256, our method saves 63% of the average wait time compared to the vanilla method. Due to the FCR mechanism, our max wait multiplier is 2.7, which is 1.4 times that of the vanilla method. However, with inference speed acceleration, the max wait time is also reduced by 48%.
**Weakness-2**: Our method emphasizes that the order of requests is not a crucial factor. The key lies in how we assemble batches. Assuming that requests in a group arrive simultaneously, reordering the index has minimal additional overhead. The main inference latency is primarily influenced by the perception of response length rather than the reordering of indices.
Once batches are formed, we can process them in any order. While the order of batch processing may affect the latency for individual users in our setup, the latency is constrained by the processing time of a group. Batch shuffling can be employed to mitigate any order-related impact. In a multi-GPU environment, batches containing different predicted response lengths are dispatched to different GPUs and processed simultaneously. In this scenario, there is no explicit ordering; it is solely about batch assembly.
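The batch-assembly idea (sort by predicted length, then fill variable-size micro-batches under a token budget) can be sketched as follows. This is a hypothetical illustration; the function and parameter names (`make_microbatches`, `max_tokens_per_batch`) are ours, not from the paper:

```python
def make_microbatches(requests, predicted_lens, max_tokens_per_batch):
    """Group requests with similar predicted response lengths into
    micro-batches, capping each batch's cost, estimated as its longest
    predicted length times its size (padding dominates batch cost)."""
    # Sort request indices by predicted response length.
    order = sorted(range(len(requests)), key=lambda i: predicted_lens[i])
    batches, cur, cur_max = [], [], 0
    for i in order:
        new_max = max(cur_max, predicted_lens[i])
        # Flush the current batch if adding this request would
        # exceed the padded-token budget.
        if cur and new_max * (len(cur) + 1) > max_tokens_per_batch:
            batches.append(cur)
            cur, cur_max = [], 0
            new_max = predicted_lens[i]
        cur.append(requests[i])
        cur_max = new_max
    if cur:
        batches.append(cur)
    return batches
```

Because each batch's cost is roughly its longest predicted response times its size, batches of short responses can be large while batches of long responses stay small, which is the variable batch size (VBS) idea.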
One factor influencing inference latency is the Failure Collection and Recomputation (FCR) ratio. This ratio represents the proportion of FCR samples recalculated at the end of a batch, which causes delays in processing. To assess the effect of different FCR ratios, we modify the predicted length by a constant value 'k'. A larger predicted length results in a lower FCR ratio (more tolerable). However, a long response with a short predicted length may introduce more waste generation in batch (generating 'k' more times). The experimental results, as shown in the table, demonstrate the average and maximum wait times compared to the vanilla method, with FCR time indicating the proportion of time utilized for FCR processing.
| k | Avg. | Max | FCR ratio | FCR time | Throughput (samples/s) |
| --- | --- | --- | --------- | -------- | ---------- |
| -50 | 42% | 63% | 34% | 21% | 1.80 |
| -10 | **37%** | 53% | 20% | 20% | 2.10 |
| 0 | **37%** | **52%** | 15% | 20% | **2.27** |
| +10 | 45% | 52% | 14% | 14% | 2.20 |
| +50 | 38% | 55% | 7% | 7% | 2.10 |
---
Rebuttal Comment 1.1:
Comment: Thank you for the additional experiments. The author addressed my concerns and I am raising my rating. Please include the additional results in the final manuscript.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. We will incorporate the additional results. | Summary: >**Rebuttal:** The provided details satisfy my concerns. I think this paper should be accepted after applying the agreed changes.
>**TL;DR:** The paper presents a new technique to reduce the inference time of LLMs under intensive usage. This is an important problem that can reduce wasteful computations. However, the paper is missing some key comparisons and the experimental methodology is lacking justifications. Addressing my concerns and questions would improve my score.
This paper proposes a new technique to reduce the inference time of LLMs under intensive usage. The technique saves wasteful computations by predicting the response length and aggregating similar predicted lengths together. The paper presents an inference pipeline composed of response length prediction, failure collection and recomputation (FCR), and variable batch size (VBS).
Experimental results on real-world instruction datasets using the Vicuna-7B model demonstrated an 86% improvement in throughput without sacrificing performance quality. The datasets include the Instruction-in-Wild and Alpaca. The proposed technique is compared to previous works and outperforms them on both datasets.
Strengths: * **S.1.** The proposed technique can reduce inference time of LLMs and gain a 86% performance improvement.
* **S.2.** The paper tackles an important problem of reducing the LLM inference time and wasteful computations.
* **S.3.** The proposed technique outperforms previous compared works on two datasets.
* **S.4.** Reproduction code is provided as part of the submission.
Weaknesses: * **W.1.** The paper lacks comparisons to existing LLM inference works such as [1][2].
* **W.2.** Some of the key technique attributes are not well justified. For example, the target length prediction is four times the actual length. No experiments or ablations are provided to justify the "four". Another example is the FCR mechanism, which immediately stops the generation process when the maximum predicted length has been surpassed. There might be cases where the generation process only needs a few more tokens to complete the task.
* **W.3.** The proposed technique relies on very short inputs in order to be effective. This is rarely the case for chat-bots, which require support for multi-turn conversations and are given as the examples of LLM usage. Furthermore, the proposed technique relies on high LLM usage to create batches of similar predicted lengths. This high usage is typically found in chat-bots.
[1] Yu, Gyeong-In, Joo Seong Jeong, Geon-Woo Kim, Soojeong Kim, and Byung-Gon Chun. "Orca: A distributed serving system for {Transformer-Based} generative models." In 16th USENIX Symposium on Operating Systems Design and Implementation (OSDI 22), pp. 521-538. 2022.
[2] Pope, Reiner, Sholto Douglas, Aakanksha Chowdhery, Jacob Devlin, James Bradbury, Jonathan Heek, Kefan Xiao, Shivani Agrawal, and Jeff Dean. "Efficiently scaling transformer inference." Proceedings of Machine Learning and Systems 5 (2023).
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * **Q.1.** The paper describes several LLM use cases such as ChatGPT, Bard, and Claude. These chat-bots require support for multi-turn conversations, which expands the conversation history. How would the proposed technique work in such cases?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The limitations of the proposed technique are described throughout the paper. They include the overhead of length prediction and poor compatibility with long inputs. However, the latency effects of waiting for the aggregation of batches are not discussed or evaluated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the insightful comments. We are pleased to see that the reviewer acknowledges our contribution. The questions are answered below.
**Weakness-1**: One significant strength of our method is its compatibility with various existing toolkits. In [2], the proposal involves selecting the best multi-dimensional partitioning techniques optimized for TPU v4 slices, which focuses on operator-level improvements and is orthogonal to our method.
[1] introduces the Orca system, which selectively applies batching only to a few operations instead of processing an entire batch of requests by "batchifying" all tensor operations comprising the model. In comparison, our method is more scalable as it does not split operations and can achieve better performance with a larger group size. Moreover, our approach is simpler to implement and can be integrated into the existing inference pipeline without the need for a completely different inference system like Orca. Additionally, our method can be combined with Orca for potential further improvements.
**Weakness-2**: The term "four times" might have been misunderstood. It refers to the process of generating the response for each request four times and then selecting the maximum length among the four generated responses (the reason for choosing the maximum instead of the mean is explained in line 225). It is not about predicting a target length four times the actual length. Instead, the predicted length is determined by the predictor, and the length prediction for each request depends on the specific sample.
To address the concern about the FCR mechanism, we modify the predicted length by a constant value 'k'. A larger predicted length (more tolerant) results in a lower FCR ratio. However, a long response with a short predicted length may introduce more wasted generation in the batch (generating 'k' more times). We can view the FCR ratio as a tradeoff between these two factors. The experimental results, as shown in the table, report the average and maximum wait times (Avg. and Max) compared to the vanilla method, with FCR time indicating the proportion of time spent on FCR processing.
| k | Avg. | Max | FCR ratio | FCR time | Throughput (samples/s) |
| --- | --- | --- | --------- | -------- | ---------- |
| -50 | 42% | 63% | 34% | 21% | 1.80 |
| -10 | **37%** | 53% | 20% | 20% | 2.10 |
| 0 | **37%** | **52%** | 15% | 20% | **2.27** |
| +10 | 45% | 52% | 14% | 14% | 2.20 |
| +50 | 38% | 55% | 7% | 7% | 2.10 |
When using an accurate length predictor, it is advisable to directly use the predicted length, with a 15% FCR ratio. This strikes a balance between the time spent on FCR recomputation and the time wasted during batch computation. In some cases, the generation process might require just a few more tokens to complete the task, but it is difficult to determine which response will finish, or whether it will finish even after generating more tokens. This concern is also the reason why the maximum length among the four generations is used, as it reduces the potential FCR ratio.
**Weakness-3**: We recognize that high LLM usage is essential for creating batches with similar predicted lengths in chat-bot inference. However, our method does not assume anything about the input length, and long inputs do not significantly impact our approach. In Fig. 4, we observe that the processing time for inputs (only one forward pass) is relatively small compared to the response generation process, which remains the bottleneck for chat-bot inference due to token-by-token generation and multiple forwards.
Despite the Instruction-in-Wild dataset containing long requests, our method still manages to improve throughput compared to the vanilla approach. Additionally, we acknowledge the limitation discussed in line 288 and believe that further enhancements can be made in input length scheduling. Nevertheless, we emphasize that input processing is not the bottleneck for the chat-bot; it is primarily the response generation process that poses the greatest computational challenge.
**Question-1**: In multi-turn conversations, predicting the response length for each turn is feasible based on the conversation history. The method used depends on how the conversation history is saved. If only texts are preserved with key-value (kv) keys dropped, it becomes a long input request, and it can be directly handled. On the other hand, if the system keeps the kv-cache, offloading, and compression techniques (as discussed in supplementary section C) can be employed to achieve faster responses. To further enhance the approach, implementing strategies such as limited window size and input length scheduling shows promise as future work.
**Limitation**: We report the Avg. and Max waiting times compared to the vanilla method. We define the wait time for a user to be the delay from receiving the request (the start of processing a group) to generating the corresponding response. With group size 256, our method saves 63% of the average wait time compared to the vanilla method. Due to the FCR mechanism, our max wait multiplier is 2.7, which is 1.4 times that of the vanilla method. However, with inference speed acceleration, the max wait time is also reduced by 48%.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thank you for the detailed answers and results. This solves some of my concerns.
However, I'm not convinced regarding W.3 and partially W.1.
* **W.1.** I'm not fully convinced that there is not a single existing algorithm that can be compared to. Adding a detailed explanation including orthogonal compatible approaches should be sufficient.
* **W.3.** This is my main open concern. The provided Fig. 4 is computed on instruction data. The input of instruction data is usually short, and the generated outputs are usually longer than the input. This is typical for instruction-based datasets, but not for real-world applications. In cases where the inputs are long, the Response Length Prediction time would take a large portion of the computation time. For example, suppose the task is to give a score to a very long passage. In this example, the inputs are very long and the outputs are short (a single integer). The Response Length Prediction model would take just as long as the actual inference and would thus nearly double the total inference time. This is of course an edge case, but it is an actual limitation of the paper which is not explored.
Addressing my concerns would improve my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. Let us first discuss your main concern **W.3**.
- We discussed the reviewer's concern in the "Limitation and Discussion" section (line 288). While long input contexts incur overheads, they do not predominantly appear in real-world applications. As you mentioned, lengthy input is an edge case. For such cases, we recommend using traditional pipelines for lengthy inputs and our method for the others. Specifically, since the input length is known, we can simply fall back to the naive inference pipeline by setting a threshold. For instance, when the input is longer than 512 tokens, we can always adopt naive inference instead. This would achieve a better trade-off and combine the merits of both approaches. A future direction might merge input-length scheduling with our output-length strategies for optimized inference.
- Table 2 highlights that real-world chatbots like GPT-4 and Claude can perceive response length. We can thus employ the Perception in Advance (PiA) method (introduced line 95) as detailed in Appendix Section C. In this case, the length prediction and response generation model is the same one and thus the kv-cache can be reused. Using PiA's kv-cache for output creation avoids additional computational steps for very long inputs. However, without public access to these models' weights, we can't provide experiments on them.
- We've covered both strategies in our paper and plan to streamline this discussion in our revisions. We argue that these limitations don't detract from our contribution, as applications with lengthy input contexts can benefit from the mentioned solutions.
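The threshold-based fallback described above is straightforward to implement because the input length of a request is known before any model call. A minimal sketch, where the threshold value and all names are illustrative rather than taken from the paper's code:

```python
def dispatch(request_tokens, threshold=512):
    """Route a request based on its (known) input length.

    Inputs longer than the threshold fall back to the naive inference
    pipeline, since length-prediction overhead would dominate there;
    shorter inputs go through length-aware batch scheduling.
    """
    if len(request_tokens) > threshold:
        return "naive"
    return "length_scheduled"
```

This keeps the worst case (very long inputs) no slower than vanilla inference while retaining the speedup on typical chat-bot traffic.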
For your concern **W.1**, we claim that our method is orthogonal to other methods. Following your advice, we will add a detailed explanation in "Efficient LLM Inference" subsection. We discuss other inference acceleration methods by their categories according to [18]:
1. Optimization strategies such as pruning and quantization [8, 11, 35]: These reduce FLOPs needed for a forward pass. Our method does not affect this and thus is compatible.
2. Mapping and scheduling of operations [5,7]: Our approach retains the transformer's operation type and sequence, allowing existing strategies to apply.
3. Optimizing batching: [10] prioritizes input length, while we focus on output length. [3] focuses on few-shot settings, a minimal usage context.
As a result, most of these methods optimize inference speed along a different dimension, and we believe comparing our method's speedup against them would bring no additional insight to our paper. Combining these methods into a comprehensive system is deferred to future work, as it surpasses this paper's scope.
(number references are the same as in the main text) | Summary: This paper proposes the technique of using an LLM to make LLM inference more efficient. It predicts each query's response length and groups queries with similar predicted lengths into the same micro-batch, so that inference efficiency is effectively improved.
Strengths: In the experiments, the proposed method gains significant improvement in terms of inference speed.
It is reasonable that the token redundancy leads to inefficient inference in batch.
It is effective and easy to implement for existing LLMs.
Weaknesses: The metric of horizontal axis in Fig. 2 (a) is missing.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Imagine a scenario in which an extremely long response length is predicted for a user's query; would this user wait a long time for the response?
How about using a small model to learn to predict the response length?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the insightful comments. We are pleased to see that the reviewer acknowledges our contribution. The questions are answered below.
**Question-1**: No matter how extreme the predicted length is (we can also assume a worst case: this sample actually gets a very short response), the sample is processed within a group of size 256 in our experiment. This means the wait time for a user is bounded. We measured that our method saves 63% of the average waiting time for all users and 48% for the longest-waiting users. Although the relative waiting time for the worst case is enlarged by 40% w.r.t. the average waiting time, our method's acceleration makes the real waiting time shorter for **all** users.
In addition, within a group, we dispatch different samples into batches, and the execution order of the batches can be shuffled, which means a sample with an extreme predicted length may not be the last one to be computed. The situation is even better in a multi-GPU setting: sequence scheduling only assembles batches, and the batches can be processed by different GPUs at the same time. Thus, a wrongly predicted response length will not lead to a long waiting time.
**Question-2**: Using a small model is a tradeoff between length-prediction overhead and wasted-token savings. In Table 4, we show that Pooling + MLP improves throughput by 61%, which is 25% lower than using the instruction-tuned model. In Table 3, a smaller model (GPT-2) achieves worse prediction performance than Pooling + MLP and thus may not yield better overall performance.
**Weakness-1**: The metric of the horizontal axis in Fig. 2(a) is "token", and we will add it in the revision. | Summary: The authors propose to improve the throughput of LLM inference systems by correctly predicting the length of the response.
Method summary:
1. Predict the length of the response (Binning length for prediction modules to learn better)
2. Use the prediction to batch the queries with similar prediction to improve throughput (use variable batch size to leverage GPUs while managing memory requirements)
3. A failure-collection module cuts off evaluation of mispredicted batches. To ensure that this module is not triggered too often, it is advisable to over-estimate the predicted length.
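The three-step recipe above can be sketched in a few lines. This is only an illustration of the scheduling idea, not the authors' implementation; `predict_len` stands in for any response-length predictor, and the over-estimation factor reflects the advice to over-predict:

```python
def schedule_batches(queries, predict_len, batch_size=8, overestimate=1.2):
    """Sort queries by (over-estimated) predicted response length and
    slice into micro-batches, so sequences in a batch finish decoding at
    similar times and little computation is wasted on padding."""
    ranked = sorted(queries, key=lambda q: overestimate * predict_len(q))
    return [ranked[i:i + batch_size] for i in range(0, len(ranked), batch_size)]
```

Each micro-batch can then be decoded with a generation budget set by its longest predicted member, with the failure-collection module re-queuing any sequence that exceeds its budget.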
Strengths: 1. The paper tackles an important problem, provides a simple recipe for the solution and works reasonably well.
2. The evaluation is to the point. Answers all the natural questions that might arise.
Weaknesses: The writing can be better. Some details in the questions section.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Comments:
1. For Table 2 to make sense, it would be useful to have some data statistics, like the distribution of response lengths on those 175 instructions.
2. The Table 3 caption can be improved a lot. It was not clear what I was looking at at first. The table discusses Vicuna model inference and various prediction methods for it. Elaborate on the caption.
Questions:
1. In Table 4, why is the vanilla avg. length significantly different from the other lengths, including the ground-truth predictor?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the insightful comments. We are pleased to see that the reviewer acknowledges our contribution. The questions are answered below.
**Question-1**: We appreciate the reviewer's question about the 'Avg. length' metric. Indeed, the metric does not mean the average length of the generated responses but rather measures the average number of tokens generated. The latter includes tokens wasted when short sequences wait for the longest one in the batch to complete. In fact, since we do not change the model, with the same random seed the response for a specific prompt under different inference acceleration methods (including vanilla, the ground-truth predictor, and ours) is the same, and so is the response length. Therefore, a smaller Avg. length means less waste in token generation and better performance. The vanilla Avg. length is much worse than that of the ground-truth predictor because vanilla batching wastes many tokens. We will change the 'Avg. length' metric to a 'Tokens wasted' metric for better understanding.
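The metric can be made concrete: in naive batched decoding, every sequence in a batch is decoded until the longest one finishes, so the tokens generated per sequence equal the batch maximum. A hypothetical sketch (not the paper's code), representing each batch as a list of true response lengths:

```python
def tokens_generated(batches):
    """Total token-generation steps: each sequence in a batch is decoded
    for as many steps as the longest sequence in that batch."""
    return sum(len(batch) * max(batch) for batch in batches)

def tokens_wasted(batches):
    """Generation steps spent beyond each response's true length."""
    return tokens_generated(batches) - sum(sum(batch) for batch in batches)
```

For a single batch with true lengths 10 and 100, both sequences are decoded for 100 steps, so 200 steps are spent and 90 of them are wasted; grouping sequences by similar length shrinks this gap.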
**Comment-1**: The distribution of response lengths between ChatGPT and Vicuna is given in Figure 2(a), which can serve as a reference.
**Comment-2**: We acknowledge the need for a clearer caption for the table. In the revised version, we will modify the caption to read: "Response length perception performance comparison: we evaluate different prediction methods for Vicuna model response lengths on 10k instructions."
---
Rebuttal Comment 1.1:
Title: Response to the authors
Comment: Thank you for the clarifications. Please make the discussed changes to the manuscript. Hope it gets in!
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. We will incorporate the changes. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Inferring the Future by Imagining the Past | Accept (spotlight) | Summary: This paper presents an efficient Monte Carlo algorithm to infer goals from single snapshots. The problem the authors consider is the same as in work by Lopez-Brau (2020, 2022), but previous solutions are slow because they apply rejection sampling. In this work, the authors use the insight that one can sample a valid path by independently sampling the parts of the path "before" and "after" the current snapshot, decreasing the computational complexity by a large amount. Further, A-star search is used to find likely paths. The authors validate their algorithm on different grid worlds and provide results of a human study indicating that the inferences of their algorithm coincide with human judgements.
Strengths: The problem of inferring goals from a single snapshot is interesting, as a major limitation of most IRL methods is that they only work in a certain context (state/actions of the full trajectory).
The paper is clearly written and easy to understand.
The presented method is much more efficient than previous methods in terms of scalability.
The plausibility of the results found by the algorithm is supported by human experiments.
Weaknesses: Quantities are not clearly defined. Are states/actions/goals continuous or discrete? In the formulas, one integrates over states and sums over actions. It might be very helpful for the reader to formally define the MDP and its sets. Further, what are "paths" formally, and what does the indexing operator "[0]" mean for paths?
While the main idea of the algorithm to sample the path "before" and "after" is very simple, the overall algorithm seems quite hacky to me: The basic idea is combined with other tweaks such as roulette termination and A-star search to avoid unlikely paths. With these, the considered tasks can be efficiently solved but there is no particularly nice linking theory.
In my opinion, the contribution is not strong enough for a NeurIPS submission as the main (conceptual) difference to previously published algorithms is not that large. The essential difference is that parts of the path before and after are sampled, together with other tweaks.
I would find it helpful if the algorithm clearly indicated which quantity it computes ($p(x \mid g)$); this is not directly visible.
The plots in Table 1 do not have a colorbar, therefore the precise meaning and interval of the color's values is not clear to me.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: The regarded grid worlds all have a discrete state/action space. Is the algorithm applicable to the continuous domain?
When introducing the algorithm, you write that the situation of the agent you consider is an MDP but you would relax this assumption later in the paper. Where do you do that?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Limitations are well discussed. To my understanding, only discrete state/action/goal spaces are considered. This point could be added.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments about our paper. We address your major concern about the lack of a linking theory in the **common response**. Here, we address your additional questions:
**Is our method applicable to continuous domains?**
Yes, it is: the cart-pole example in the supplement shows our method in a continuous state space. When revising, we will mention this in the main text. We will also update the paper to acknowledge that we do not address continuous _action_ spaces.
**Where do we relax the MDP assumption?**
Thank you for raising this — we will update the paper to clarify that we relax the MDP assumption in Sec 2.3. (Please see the common response regarding A-star for more information on why we do so, and how we will further revise that section based on the reviewers' comments.)
**Improving presentation and definition of key quantities**
Thank you for the suggestions for improving the paper's presentation. We will update the paper to:
1. introduce notation that defines the state/action spaces of MDPs,
2. formalize "paths" as finite ordered sequences of states, which are indexed and sliced the same as lists,
3. clarify that the output of Algorithms 1 and 2 is an estimate of $p(x \mid g)$, and
4. include colorbars in Tables 1 and 2.
---
Rebuttal Comment 1.1:
Comment: Thank you for answering my questions. I still believe that the contribution is quite incremental therefore I keep my original score. | Summary: This papers proposes a new approach to the inference problem of inferring an agent's goal state from information about a single state. This approach is based on the bidirectional Monte Carlo sampling of trajectory sequences, both from the agent's given state $x$ to the goal state $g$, and from an initial state $s$ to $x$. The authors show near convergent inference of the approach with 10 samples on two gridworld based environments, and a word game kind of environment when evaluated against human judgement.
Strengths: - The evaluations show a strong "near"-convergence result with only 10 samples, even for multi-stage planning. This indicates their sampling strategy is practically sample-efficient and a huge improvement over prior approaches, which reportedly require more than 300,000 samples.
- The connection drawn from path tracing in computer graphics is a fresh insight, which the authors show applies well to the goal/previous-state inference problem, and it can potentially have broader applications, such as spatial navigation tasks.
- The use of human judgement (using 200 participants) for evaluation is also a very reasonable choice instead of hand-engineered reward functions designed by a single person/a few people.
- The authors also show efficient sampling in a variety of other domains (in the appendix) which indicates it is more broadly applicable.
Weaknesses: - _Experiments_: Something I might myself be unclear about, so let me know if that is the case-- there are no evaluations comparing the proposed model in this paper with the model shown by [Lopez-Brau et al 2022](https://psyarxiv.com/4zu9n), is this only because of sample-efficiency? It would be interesting to see how "nearly" convergent their proposed approach is in the cases where it can be applied.
- _Theoretical analysis_: What is the complexity of the sampling with the size of the state space, the number of accessible states, the number of starting/goal states etc-- is it possible to perform some kind of theoretical analysis of this?
I'll be very happy to increase my score on the addressal of these concerns/doubts.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Have you checked how the sampling scales with an increasing number of possible starting states, and an increasing number of possible goal states? It could potentially be interesting to see.
- Why is incremental A-star search used for planning on-line?
- I am wondering if it is possible to elaborate on the scope of the applications of the proposed sampling approach-- what other domains can it be used in, apart from tasks that look like spatial navigation? The word blocks is one good example, as is the cartpole experiment. There are intuitive-physics-based experiments, or reasoning-based tasks which involve inferring over a sequence of images, say; can this approach be applied there?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors adequately address the limitations of their approach, especially when it comes to sampling over goals, or scaling up to a larger goal space. There is just one minor point here-- the approach is based on a combination of well-founded insights and ideas, which when put together, work well, but there seems to be a lot of hand-engineering/domain-specific engineering in assembling these together, which I think should be acknowledged in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful feedback on our paper. We address your questions below, and in the **common response**.
**Did you compare to Lopez-Brau et al?**
Yes, we indeed show these comparisons in Tables 1/2/3, under the heading of "Rejection." We apologize for the confusion and will revise the heading to make clear that that column refers to Lopez-Brau et al's method.
**Theoretical analysis of scaling**
We agree it would be good to address this, and we will update the paper with a discussion.
The time complexity of taking a _single sample_ is straightforward: it scales linearly with the Russian Roulette termination depth and the number of possible goals, but is constant in the number of possible start states (because we sample backwards in time!). The plots in the **attached PDF file** establish this empirically using the grid-world domain; we will include such figures in the revision of our paper.
(Runtime also scales linearly in the number of accessible "neighbor" states, because we softmax over possible actions. But in practice this number is fixed by the domain, and typically a small constant that contributes negligible overhead. For example, in our grid-worlds the number of accessible states is typically 4 for north/east/south/west.)
Finally, another important factor is the _number of samples_ needed, which depends on how much variance is acceptable. Standard theoretical bounds for Monte Carlo integration hold here: the variance of the estimate decreases as $1/N$ in the number of samples $N$, and can decrease even faster via stratification [1, 2]. Such algorithms are well-studied in graphics [3, 4], and those tighter bounds may also be applicable to our setting.
1. Hickernell. Koksma‐Hlawka Inequality. 2014.
2. Bakhvalov. On the approximate calculation of multiple integrals. 2015.
3. Singh et al. Analysis of sample correlations for Monte Carlo rendering. 2019.
4. Subr et al. Fourier analysis of numerical integration in Monte Carlo rendering: theory and practice. 2016.
**More non-spatial applications**
We are glad the reviewers appreciated our non-spatial examples like blocks and cart-pole, and we are happy to expand our paper with a discussion of more such applications. As suggested, there is indeed scope for applications in intuitive physics and reasoning, such as:
+ Determining the viscosity of a fluid from a static image of it being splashed
+ Interpreting comic books or step-by-step instruction manuals, by inferring what happens between static panels
Additional domains where our method could be useful include:
+ Robotics: walking into a kitchen and immediately recognizing what the chef is cooking, in order to help accordingly
+ Vision: understanding dynamic action in static snapshots of sports games
+ Forensics: inferring/reconstructing what occurred at a crime scene based on observable evidence
**Acknowledging engineering work**
Thank you for raising this - we will revise the paper accordingly. Please see the end of the **common response** for our full remarks on this point.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you to the authors for answering my questions and for the detailed clarifications!
- sample complexity analysis: I see, makes sense. Thanks for attaching the plots, nevertheless.
- revision of writing and scope of work: The authors' response of revising the paper with respect to their contributions and motivation for using incremental A* seems reasonable to me.
Thus, I increase my score to 8 for a Strong Accept. | Summary: This paper presents an algorithm for efficiently inferring the goal of an RL agent from just observing its current (single) state $x$. The method improves substantially upon rejection sampling based prior work by 1) only sampling paths through $x$ by separating the path into past and future 2) sampling the past path backwards in time starting from $x$ and 3) using incremental A-star search for planning.
The efficacy of the approach is demonstrated in two gridworld domains and a word-blocks game (as well as three more in the supplementary). The posterior estimates over the goal of the agent qualitatively match human judgements, and converges several orders of magnitude faster than vanilla rejection sampling.
Strengths: * The studied problem is interesting and potentially impactful for multi-agent RL or human-agent interactions.
* The algorithmic modifications are well motivated and the efficiency improvement above simple rejection sampling is impressive.
* The paper validated the results against human judgements and found a high correlation
* The presentation is unusually clear, and a fun read.
Weaknesses: The quantitative evaluation in Table 3 assumes the ground-truth to be a converged estimate of the presented method. This leaves open the possibility that the TV is better than the rejection sampling baseline because the presented method is biased in some way. The concern is partially addressed with the numerical validation check in the supplementary section B, which in my opinion should at least be mentioned in the main paper. Another way to strengthen the results in Tab 3 would be to also compare to a "ground-truth" estimate from (very?) many vanilla rejection samples. If the presented method is still better at approximating this ground-truth with few samples than rejection sampling with few samples, that would be strong evidence.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: * I am unclear about the role of A* in the proposed algorithm (Sec 2.3). I assume that $p(x\rightarrow x') \propto \sum_a \exp(\beta(C(x \rightarrow g) - C(x' \rightarrow g)))$ takes the role of the A* heuristic. But in A* this heuristic is used to decide which node to expand next, and I cannot find the equivalent of this search process in Algorithm 2. Could you please clarify the correspondence of A* with the proposed algorithm.
* I am unclear about the shaded cells in in Table 1. The description states that "Shaded cells were excluded from the analysis because it would be irrational for the agent to be there for any goal". I assume "shaded cells" refers to the gray cells in the rows corresponding to the keys world. But the shading of cells doesn't make sense to me. In row 3 it seems to correspond to inaccessible areas (which make sense to exclude). In row 4 the bottom third is shaded except for a single spot next to the green key. Why would that spot be less irrational than the surrounding ones? In row 5 the leftmost column is shaded which also seems odd. Is it impossible for the agent to start there? Also why are no cells shaded in row 1? Please clarify.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors discuss an important limitation of their work regarding the sampling over goals.
Another limitation that may be worth pointing out is that the separation of paths into independent paths for past and future depends on the environment to be Markovian (this *is* mentioned in the paper). More specifically that the current states $x$ contains all information connecting the past and the future. For more realistic environments this would for example have to include quantities like velocity or any relevant inner state of the agent in addition to the current position. Relaxing this assumption would require accounting for latent states and thus be equivalent to observing a distributions over current states $x$ instead of a concrete state.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful feedback on our work. Please see the **common response** for our remarks on the role of A-star. We respond to the rest of your questions below.
**Table 3 ground truth**
Thank you for raising this concern, which we will address as follows:
1. Mention the correctness checks in the main body of the paper, as suggested.
2. Expand Table 3 with two new columns showing analogous TVs compared to converged posterior estimates from many rejection samples, as suggested. For example, for the start-anywhere grid-world, the numbers using 10,000 rejection samples are below (essentially the same, as expected):
| Method | Ground truth | Total variation |
|---|---|---|
| Rejection | Ours, 1k samples | 0.159 (from paper) |
| Rejection | Rejection, 10k samples | 0.153 **(new)** |
| Ours | Ours, 1k samples | 0.0538 (from paper) |
| Ours | Rejection, 10k samples | 0.0836 **(new)** |
3. As an additional assurance, we can show scatterplots analogous to Fig 4 for our main benchmarks, to emphasize unbiasedness. This is essentially replotting the data in Tables 1/2. As an example, the scatterplot for grid-world is in the **attached PDF file** — the two candidate ground-truth posteriors match nearly perfectly.
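The total-variation numbers above follow the standard definition for discrete distributions, $TV(p,q) = \frac{1}{2}\sum_g |p(g)-q(g)|$. A minimal helper (illustrative, not tied to the paper's code) for comparing two estimated posteriors over goals:

```python
def total_variation(p, q):
    """Total variation distance between two discrete distributions,
    given as dicts mapping goal -> probability."""
    goals = set(p) | set(q)
    return 0.5 * sum(abs(p.get(g, 0.0) - q.get(g, 0.0)) for g in goals)
```

For example, comparing a point mass on one goal against a uniform posterior over two goals gives a distance of 0.5.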
**Shaded cells in doors-keys-gems**
We appreciate the detailed reading of our figures. Yes, the caption refers to the grayed cells in the keys domain, and the shaded cells in Row 3 are indeed shaded because they are inaccessible (we will make this clear when revising).
As for the other rows, we will update the paper to explain our reasoning:
+ The shaded cells in Row 4 are excluded because if the agent already picked up the pink key earlier, there is no reason to _then_ move far _away_ from the doors. Similarly, in Row 5, there is no reason to move that far left after picking up the green key. We found that querying repeatedly at those extreme locations confuses human participants about the task setup. So, we excluded most such locations from our experiments.
+ The single cell above the green key in Row 4 is an exception we wished to test. You might think from that position that the agent is trying to collect _both_ keys, leading to uncertainty about the goal. But people (and the model) agree that the agent still wants the red gem.
We will additionally clarify that the agent always starts empty-handed, which may be one source of confusion here.
**Accounting for latent state information**
Thank you for raising this limitation. We believe the cart-pole example from the supplement begins to address this: we make inferences when only positions, not velocities, are visible. It works exactly as you suggest, i.e. by observing a distribution over current states $x$. We will update the paper to explain this in more detail.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I thank the authors for the clarifications, and for providing additional evidence for the unbiasedness of their method. Their response addresses my questions and concerns and would like to reaffirm my recommendation for a strong accept. | Summary: This paper deals with the problem of inferring the goal state $g$ of an agent given only a single state $x$ in a trajectory, i.e., inferring $p(g|x)$.
For this, the authors claim that we need to integrate over all possible initial states, and thus sampling past trajectories is necessary for Monte Carlo estimation.
They propose several techniques to improve the sampling efficiency, and the main technique is to sample the past trajectory in reverse, starting from the given state $x$.
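The bidirectional idea can be illustrated with a toy sketch: sample the past half of the path backwards in time from the observed state $x$ (with roulette termination deciding how far back to go) and the future half forwards to the goal, then concatenate; given $x$, the two halves are independent under the Markov assumption. This is only a schematic under assumed single-step samplers `step_back`/`step_fwd`, not the paper's weighted importance-sampling algorithm:

```python
import random

def sample_path_through_x(x, goal, step_back, step_fwd, max_len=50):
    """Sample one trajectory constrained to pass through x."""
    # Past half: walk backwards in time from x; Russian-roulette
    # termination decides how far back the trajectory extends.
    past = [x]
    while random.random() < 0.8 and len(past) < max_len:
        past.append(step_back(past[-1]))
    past.reverse()
    # Future half: walk forwards from x until the goal (or a length cap).
    future = [x]
    while future[-1] != goal and len(future) < max_len:
        future.append(step_fwd(future[-1], goal))
    # The reversed past ends at x; drop the duplicate before joining.
    return past[:-1] + future
```

Because every sampled path passes through $x$ by construction, no samples are rejected, which is the source of the efficiency gain over rejection sampling.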
Strengths: The key intuition and motivation are clearly described.
Weaknesses: ### The key assumptions are not clearly stated
I think the main text does not explain the key assumptions, such as
- What is given? A fully-trained agent? A fully transparent environment with perfectly known dynamics?
- How can we perform the past and future sampling?
### Soundness of the main argument
Most importantly, I am not convinced that explicitly sampling the past trajectory is necessary to infer $p(g|x)$.
Since $x$ contains all the information that we have, sampling the past trajectory seems like an unnecessary complication.
I think we can train a predictor that produces $p(g|x)$ directly from a decent amount of offline trajectories.
Moreover, I believe there is a serious mistake in Eq.(2).
The left hand side of Eq.(2) is $p(x|g)$, but the right hand side assumes that every path passes through $x$ and does not consider the cases where $x$ is not included in the path.
As the authors noted in L81-82, most paths would not pass through $x$, so there should be a term that accounts for this fact.
### Limited scope of the experiments
The experiments are limited to simple toy problems like grid world.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Please state the key assumptions
- Can you prove that sampling the past trajectory is absolutely necessary to predict $p(g|x)$?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for raising several important points about our work. We believe we can address all of your concerns — we respond to them at the beginning of the **common response**.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response.
I admit that I was wrong about Eq.(2). The restriction to paths passing through $x$ is expressed in the subscripts of $\pi$.
I also had not fully comprehended the underlying assumptions and problem setting, but after reading the response and the other reviews, I think I now understand them more clearly.
I'm raising my score, but still have some concerns and suggestions.
- In addition to the stated key assumptions, I think the proposed method requires at least one more important assumption: for importance sampling into the past (L119-121), $p(s | s', g)$ should be easily computable.
This would be trivial if there are only a few possible $s$ for an $s'$, as in the grid world, but it can be challenging in other environments.
- I felt the overall writing is a bit flashy.
The Hemingway and cognitive science stories sound far-fetched, compared to the actual algorithms and experiments.
I think toning it down can improve the delivery of key messages.
- I think this paper has some novelty in academic perspective, but not sure if it can have practical values.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comments. We are glad the discussion helped clear up any confusion in the original paper, and we appreciate the new suggestions.
- **Past sampling:** Thank you for raising this — yes, we will revise the paper to note that we assume access to reverse transition dynamics. We will also:
1. Explain how the number of past/future “neighbors” of each state affects the algorithm’s runtime in a given environment (see the discussion on scaling with cLNU).
2. Highlight cart-pole as a case where reverse dynamics cannot be computed analytically, and were instead approximated with a neural network.
- **Practical value:** We will update the paper to clarify that our primary goal is to model human intelligence, not to enable any particular application:
1. We will note that we are concerned with sample-efficiency specifically to explain how humans make such rapid intuitive judgements.
2. We will better situate our work in the cognitive science literature (referenced in our common response above).
3. That said, even though it is not our primary focus in this paper, our method *could* enable potential AI applications in the future (we highlight some examples in our response to cLNU under “More non-spatial applications”). We will add a discussion to the future work section of the paper.
- **Writing:** Thank you for the suggestion — we are happy to adjust/tone down the writing based on the reviewers’ feedback. In particular, we realize that the references to cognitive science may seem extraneous because the current abstract and introduction do not explain that our goal is specifically to model human intelligence. We will adjust the writing to clarify this. For example, here are some candidate edits to the abstract:
> … **In this paper, we seek to model how humans make such rapid and flexible inferences,** even in domains they have never seen before. Building on a long line of work in cognitive science, we offer a Monte Carlo algorithm **whose inferences correlate well with human responses** in a wide variety of domains — while only taking a **small, cognitively-plausible number of samples.** Our key technical insight is to draw an analogy between our problem and Monte Carlo path tracing in computer graphics, which allows us to borrow ideas from the rendering community and dramatically increase the algorithm’s sample-efficiency.
And a more representative tl;dr:
> We **model how humans** infer an agent's goal from a snapshot of its current state. We frame the problem as Monte Carlo path tracing, which allows us to apply ideas from computer graphics to design a **cognitively-plausible sample-efficient algorithm**. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their thoughtful feedback. We address some key concerns below, and the rest in individual responses.
**Is sampling really necessary? Why not fit a neural network? (iHfx)**
Thank you for raising this important point. We realize that our paper's motivation was not fully clear, and we will revise the paper to explain the broader context of our work.
If our goal was only to solve this inference task in a particular environment, we could indeed just fit a model to predict $p(g\mid x)$ from datasets of offline trajectories generated in that environment. But here, we are interested specifically in how humans flexibly make such inferences _without_ extensive pre-training on data -- and how AI systems could do likewise. A long line of empirical work in cognitive science, particularly in Theory of Mind, shows that people (even infants and young children) make rapid, flexible, and robust judgments "out of the box": in novel domains they have never seen before, and without extensive pre-training on data [1-5]. This remarkable one-shot ability is what motivates our work, and our sampling-based algorithm specifically seeks to capture that ability.
1. Gergely, G., and Csibra, G. "Teleological reasoning in infancy: The naive theory of rational action." _Trends in Cognitive Sciences_ 7.7 (2003): 287-292.
2. Baker, Chris L., Rebecca Saxe, and Joshua B. Tenenbaum. "Action understanding as inverse planning." _Cognition_ 113.3 (2009): 329-349.
3. Baker, Chris L., et al. "Rational quantitative attribution of beliefs, desires and percepts in human mentalizing." _Nature Human Behaviour_ 1.4 (2017): 0064.
4. Jara-Ettinger, Julian, et al. "Children’s understanding of the costs and rewards underlying rational action." _Cognition_ 140 (2015): 14-23.
5. Hamlin, J. Kiley, Karen Wynn, and Paul Bloom. "Social evaluation by preverbal infants." _Nature_ 450.7169 (2007): 557-559.
**Limited scope of experiments? (iHfx)**
As part of the revisions promised above, we will explain that we based our experiments on tasks studied by empirical cognitive science research. Our experiments are comparable in scale to contemporary related work, both in AI (Zhi-Xuan et al, NeurIPS 2020; Shah et al, ICLR 2019) and cognitive science (Lopez-Brau et al, CogSci 2020). Furthermore, while prior work typically evaluates on only 1-3 domains, we consider a total of 6 domains across a wide variety of conditions (partial observability, multi-agent, continuous physics, etc.).
**Possible mistake in Eqn 2? (iHfx)**
Thank you for the attention to detail! This is a subtle point, but we still believe there is no mistake. We will update the paper to clarify: paths that do not pass through $x$ contribute zero likelihood of being observed at $x$, so we only need to integrate over paths that _do_ pass through $x$ (the "other term" would be zero). More generally, we hope that the numerical check in Supplement B assuages correctness concerns.
**Stating key assumptions (iHfx)**
We will revise the paper to clarify our assumptions: we assume full knowledge of environmental dynamics, and access to a planner to model how an agent would act given goal $g$. Formally, this is captured by $P(s \rightarrow s^\prime \mid g)$, a key input to Algorithms 1 and 2. Finally, we assume the ability to enumerate the neighbors of the current state, which are used for past/future sampling.
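Under these assumptions, the past-sampling step can be illustrated with a toy sketch. This is a hypothetical illustration only, not the paper's actual algorithm: a 1-D corridor with candidate goals at either end, an invented noisy-rational agent model, a uniform reverse proposal, and arbitrary constants throughout.

```python
import random
from collections import Counter

N = 10          # corridor states 0..N (illustrative)
GOALS = [0, N]  # candidate goals at either end

def forward_step_prob(s, s_next, g):
    """Noisy-rational agent: mostly steps toward g, occasionally away."""
    if s == g:
        return 1.0 if s_next == s else 0.0
    toward = s - 1 if g < s else s + 1
    away = s + 1 if g < s else s - 1
    if s_next == toward:
        return 0.9
    if s_next == away and 0 <= away <= N:
        return 0.1
    return 0.0

def sample_past_path_weight(x, g, horizon, rng):
    """Walk backward from the observed state x, reweighting each reverse
    step by the forward transition probability (importance sampling
    against a uniform reverse proposal)."""
    weight, s = 1.0, x
    for _ in range(horizon):
        candidates = [p for p in (s - 1, s + 1, s) if 0 <= p <= N]
        prev = rng.choice(candidates)
        weight *= forward_step_prob(prev, s, g) * len(candidates)
        s = prev
    return weight

def posterior_goal(x, horizon=5, n_samples=2000, seed=0):
    """Monte Carlo estimate of p(g | x) with a uniform prior over goals."""
    rng = random.Random(seed)
    scores = Counter()
    for g in GOALS:
        scores[g] = sum(sample_past_path_weight(x, g, horizon, rng)
                        for _ in range(n_samples))
    total = sum(scores.values())
    return {g: scores[g] / total for g in GOALS}
```

In this toy setting, an agent observed at state 2 is judged far more likely to be heading for the nearby goal 0: reverse paths consistent with goal 10 must squeeze past the corridor boundary and collect low-probability "away" steps.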
**Role of incremental A-star? (yiSb, cLNU)**
We realize that our motivation for using A-star was not clearly explained in Sec 2.3. We will revise the paper to clarify that we use A-star planning _only as an optimization_ — we can obtain the same results using value iteration, as in prior work, or for that matter any other planning algorithm (e.g. our cart-pole example uses deep RL).
We will additionally update the paper to better explain _why_ we use A-star:
1. Prior work formulates problems as MDPs, modeling agents' action choices by softmax over $Q(x, a)$. The drawback is that the full MDP must be solved offline before inference, e.g. by value iteration or deep RL. This is (a) cognitively implausible, and (b) wasteful if only some states are queried at inference time. Additionally, not all tasks are well-modeled by MDPs (e.g. multi-stage long-horizon planning).
2. Our solution is that where possible, we instead formulate problems as classical planning domains (c.f. PDDL) and weight actions by softmax over how much closer an action brings the agent to the goal. That is, for moving onto state $x^\prime$, we use the cost difference $C(x \rightarrow g) - C(x^\prime \rightarrow g)$.
3. The upshot is that we can compute the cost $C$ on-line by A-star search, avoiding the need for precomputation and only exploring relevant states. An additional optimization is to cache/memoize A-star's intermediate computations to avoid duplicate work if calls to $C$ are repeated.
We are happy to explain further if this remains unclear.
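The cost-difference weighting above can be made concrete with a minimal sketch in Python. This is an illustrative toy, not the paper's code: a 4-connected grid where the grid size, goal location, and softmax temperature `beta` are all invented choices.

```python
import heapq
import math
from functools import lru_cache

# Toy 4-connected grid; all constants are illustrative.
GRID_W, GRID_H = 5, 5
GOAL = (4, 4)
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def neighbors(s):
    x, y = s
    for dx, dy in MOVES:
        nx, ny = x + dx, y + dy
        if 0 <= nx < GRID_W and 0 <= ny < GRID_H:
            yield (nx, ny)

@lru_cache(maxsize=None)  # memoize so repeated calls to C reuse work
def cost_to_go(s, goal):
    """A* cost C(s -> goal) with an admissible Manhattan heuristic."""
    h = lambda u: abs(u[0] - goal[0]) + abs(u[1] - goal[1])
    frontier, best = [(h(s), 0, s)], {s: 0}
    while frontier:
        _, g_cost, u = heapq.heappop(frontier)
        if u == goal:
            return g_cost
        if g_cost > best.get(u, math.inf):
            continue  # stale queue entry
        for v in neighbors(u):
            if g_cost + 1 < best.get(v, math.inf):
                best[v] = g_cost + 1
                heapq.heappush(frontier, (g_cost + 1 + h(v), g_cost + 1, v))
    return math.inf  # goal unreachable

def action_probs(s, goal, beta=2.0):
    """Weight each successor s' by softmax over C(s -> g) - C(s' -> g)."""
    succs = list(neighbors(s))
    base = cost_to_go(s, goal)
    scores = [beta * (base - cost_to_go(v, goal)) for v in succs]
    z = sum(math.exp(sc) for sc in scores)
    return {v: math.exp(sc) / z for v, sc in zip(succs, scores)}
```

Actions that bring the agent closer to the goal receive exponentially higher probability, and only the states A* actually touches are ever expanded, so no offline solve of the full state space is needed.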
**Is this a "hacky" approach with no linking theory? (PVLo, cLNU)**
We realize that our exposition confusingly mixes theory and engineering, and we thank the reviewers for raising this issue. We will revise the paper to explain that there is indeed a core linking theory at play: the theory of Monte Carlo path tracing.
Rather than a grab-bag of "tweaks," we see our paper as organized around one key idea: the analogy between light paths and agent paths. This idea is what allows us to (1) reframe our problem using the theory of Monte Carlo path tracing, and (2) import existing engineering techniques motivated by that theory (e.g. Russian Roulette, bidirectional tracing) and thus solve our problem efficiently.
In short, we agree with cLNU's assessment: our work combines well-founded insights with some engineering work. We will revise the paper to distinguish the theoretical insights and engineering contributions we offer.
Pdf: /pdf/6f006c4df4e10ea417ce36318a4f4a9a27515d57.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
XAGen: 3D Expressive Human Avatars Generation | Accept (poster) | Summary: The paper proposes a generative model for 3D expressive human avatars. Building on components from recent works (tri-plane 3D feature representation, volumetric rendering, transformation from pose space to canonical space, parametric body models, and GANs), the authors propose to improve the expressiveness of the model by representing the hands and face with extra tri-planes and supervising them with extra adversarial losses. The result achieves state-of-the-art performance according to various evaluation metrics on various datasets. Moreover, one can animate the hands and jaw of the generated avatar, surpassing most SOTA approaches. Two practical applications of such a generative model are demonstrated in the paper.
Strengths: 1. The paper presents a state of the art generative model for 3D clothed human avatar with expressive control.
2. The paper is solid, well written and easy to follow.
3. The evaluation is extensive and the results are convincing.
4. Two practical applications are demonstrated.
Weaknesses: 1. Lack of technical novelty: multi-tri-plane and part-focused supervision are quite weak when claimed as novelty. But I'm relatively OK with this since the performance and the control of the model surpass the SOTA. Although, in my opinion, this paper is a better fit for SIGGRAPH/Asia, where technical novelty is less expected compared to NeurIPS, I believe the overall quality of the submission clearly reaches the acceptance standard for NeurIPS.
2. While I appreciate the effort on making consistent and good format of references (I notice the authors use consistent form of names of the same conference across the reference, which is rare nowadays in most of the submissions), there is still room to improve the reference, such as
a. Capitalization of words. For example: 3d -> 3D, Icon: implicit... -> ICON: Implicit..., Avartargen: a 3d... -> AvartarGen: A 3D...,
b. the URL in the references is not necessary.
c. [65] In ECCVw, 2023 -> In ECCV Workshops, 2022
3. In the supplementary video,
a. the voice over contains environment noise and echoes. It would be great to remove them.
b. the bottom left animation has a white background, occluding the slide contents. It could be better if those animations were rendered with transparent background.
c. the demonstration of controlling jaw motion and hand poses are a bit long and not eye catching. It would be more appealing if they can be shortened a bit and shown with a close-up / zoom-in view.
4. On the project page, the RGB video and the geometry visualization are asynchronized regardless of refreshing the page.
5. Suggestion. Please consider incorporating some concurrent work into related work for the updated version to make the paper more inclusive, for example: Chupa (https://snuvclab.github.io/chupa/), a generative model for 3D clothed human geometry; and SCARF (https://yfeng95.github.io/scarf/), creating an expressive avatar from monocular video.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The submission is in a good state, method clearly stated, experiment is extensive and convincing. I don't have further questions to the authors regarding reviewing this paper.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations and societal impact are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and recognition that 1) our method achieves state-of-the-art results; 2) our work is solid and its overall quality is good; 3) our experiments are extensive and convincing. We respond to each of your comments one by one in what follows.
> **Weakness 1**
We would like to thank the reviewer for the recognition that our overall quality is good for this conference. While we acknowledge the individual components used in our work have been explored in previous studies, combining and adapting them for generative and expressive 3D human modeling is a challenging and non-trivial task. Our sophisticated design not only generates high-quality 3D fully animatable avatars but also achieves clear improvements in face and hands generation/animation. Moreover, our method facilitates various downstream applications, like text-guided synthesis and video/audio-driven animation. Thus, we believe integrating all the important modules coherently and demonstrating their effects through a comprehensive ablation study constitutes a valuable contribution, as also highlighted by Reviewer 3QMX.
> **Weakness 2**
Thanks for the recognition and suggestions, we also appreciate your careful observation. We have modified the reference section accordingly and will update the final version.
> **Weakness 3**
Thanks for the useful suggestions, we will update the supplementary video accordingly in the final version.
> **Weakness 4**
We think this could be an issue with the browser. It is recommended to wait until all the samples are fully downloaded and then refresh the page. Also, we will further compress the gif files and update the webpage later to enhance the experience.
> **Weakness 5**
Thanks for the suggestion. We have already added the above concurrent works into the related work. We will update our final version accordingly. | Summary: The paper proposes a method for the generation of high quality, articulable 3D avatars of humans. The method proposed builds on top of the 3D GAN framework which has been used to learn to generate 3D articulable human bodies from collections of 2D images of humans. Additionally, the method adopts the proposed high-level methodology of generating bodies in a canonical pose and then using an explicit deformation guided by a parametric body model in order to render images in a desired pose. The novel contribution of the method is unifying this generation architecture not only for bodies, but for faces, bodies, and hands, and demonstrating that generating, articulating, and discriminating these body parts individually results in a higher quality result than handling them all simultaneously. The paper describes the architecture which accomplishes this, and demonstrates that this results in a state-of-the-art generation quality for 3D human avatars. The contributions of modeling and applying losses to these body parts separately are ablated, demonstrating that they are responsible for improvement in the quality of generated avatars.
AFTER REBUTTAL:
I have read the authors' rebuttal. I believe the additional comparisons provided for the hand and jaw quality address the weakness I raised there. As I mentioned, I still have some questions about the novelty of each of the components, but I think the method performs at the state of the art overall, and combining all of these components into a working method is important for the community. Thus, I am leaning positively.
Strengths: In my opinion, the main strengths of the paper are that:
1. The paper is well written and is clear to read and follow, and presentation is extremely polished. As a result, I feel like the method will be impactful and those working in the field of generative 3D articulable avatars will want to build on top of it.
2. The proposed method demonstrates state-of-the-art results. The evaluations are extremely convincing both qualitatively and quantitatively, and compared to the baselines EVA3D and AvatarGen which are currently generating the state-of-the-art, I believe the generated results are significantly better. This is extremely important in pushing towards photorealistic generative 3D human avatars.
3. The method is ablated well. The main contributions: separately modeling faces and hands, and using a separate discriminator for each of them, are both ablated in a very clear way showing how they improve the performance of the method. This is extremely important to understand that the contributions proposed are actually responsible for the increase in quality, rather than just hyperparameter tuning.
Weaknesses: In my opinion, there are two weaknesses of the paper.
1. The evaluation accuracy of the facial/hand articulation is not entirely convincing.
- While Table 2 shows that there are improvements in “Jaw” and “Hand” when a facial and hand pose estimator is applied to the generated results, it is only compared to the modified AvatarGen with SMPL-X (which it outperforms). In order to actually ground these numbers with a baseline, it would be insightful to apply these pose estimators to the results of EVA3D and ENARF. I understand these methods don’t model hand or facial poses, but it would be an important comparison to understand how much improvement is gotten from even modeling these at all.
- Qualitative comparisons never show multiple different identities with modified facial or hand poses (for example, in the supplementary video). Because of this, I don't know whether the examples provided (for example, mouth open) are cherry-picked or consistent. It would be immensely helpful to show a result where the facial/hand pose is held constant in a deformed state (for example, mouth open) while the identity latent space is walked through.
2. The amount of method novelty is relatively limited. While the results are very nice and this is an important contribution, the method is based off of the same framework as ENARF/GNARF/EVA3D/AvatarGen where bodies are generated in the canonical pose and then deformed, and the tri-plane representation is used for volume rendering. While explicitly modeling hands and faces is important, it seems like a combination of methods like GNARF which have already been applied to heads, and just putting them in the same model. The additional results with text-driven generation or audio-driven manipulation are very cool, but are not technically novel and are instead a combination of open-source models (and can be applied to any other generative articulable 3D representation).
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. It is mentioned that the left and right hand are represented with the same set of tri-planes but mirrored. Does this limit the capacity in the generation, i.e. when there are different accessories on either hand?
2. Is truncation used when sampling the generated results from the method? Is it used for sampling for the baseline methods, such as AvatarGen and EVA3D? This should be standard for evaluating qualitative results.
3. I assume this is using the same representation as AvatarGen, but why is it not described in the methodology. Is the representation a NeRF? An SDF? How is volume rendering done?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 2 fair
Limitations: The limitations of the method have not been explicitly addressed in the paper, although they are addressed in the supplementary information. In my opinion, including some of the limitations, especially surrounding the quality of the datasets and SMPL-X estimators (and how this actually affects the quality of the generated results) would be insightful (potentially more-so than the added applications, such as audio-driven acting or text-driven generation).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and recognition that 1) our method has SOTA results; 2) our method is ablated well. We respond to each of your comments one by one in what follows.
> **Weakness 1.1**
Thanks for the suggestions; we have computed these metrics for ENARF and EVA3D. The results are shown in Table 1 of our rebuttal PDF file. In general, the results for ENARF and EVA3D are worse than ours because they don't have explicit control over these attributes. However, there is an outlier for the ENARF Jaw MSE on the UBC dataset. We think the reason is that most of the faces in this dataset have a zero jaw pose, and the images generated by ENARF have very low resolution, so the mouth is always blurry. In this case, the pose estimator tends to predict zero jaw poses, which leads to the lowest Jaw MSE.
> **Weakness 1.2**
We visualize more results using different identities and identical mouth open and hand poses. Also, we visualize the random walk results and keep the mouth and hand pose unchanged to verify the consistency of our results. The results can be found on the left of Figure 3 in our rebuttal PDF file.
> **Weakness 2**
We would like to thank the reviewer for the recognition of our results and downstream applications. Although the frameworks mentioned by reviewer could be applied for different 3D generation tasks, these straightforward extensions cannot guarantee high-quality faces or hands simultaneously in one generation model. While we acknowledge the individual components used in our work have been explored in previous studies, combining and adapting them for generative and expressive 3D human modeling is a challenging and non-trivial task. Our sophisticated design not only generates high-quality 3D fully animatable avatars but also achieves clear improvements in face and hands generation/animation. Moreover, our method facilitates various downstream applications, like text-guided synthesis and video/audio-driven animation. Thus, we believe integrating all the important modules coherently and demonstrating their effects through a comprehensive ablation study constitutes a valuable contribution, as also highlighted by Reviewer 3QMX.
> **Question 1**
We conduct an ablation study on the hand Tri-planes, and the results are shown below:
| Hand Triplanes | FID↓ | FID_f↓ | FID_h↓ | PCK_h↑ | Hand↓ |
|-|:-:|:-:|:-:|:-:|:-:|
| double | 8.32 | 10.12 | 20.53 | 39.64 | 3.27 |
| single | 5.88 | 10.06 | 19.23 | 38.53 | 3.30 |
We can see that using two hand Tri-planes improves the control ability of the hand, but image fidelity decreases. We think the reason is that the generator must learn to produce an additional hand Tri-plane, which is more difficult to optimize. We agree that independent hand Tri-planes can model different accessories, and we observed this during training (Figure 1(c) in our rebuttal PDF file). However, accessories usually have small scales; compared with the full body image, this point is less important and does not affect the overall FID score much, as confirmed by the hand FID score.
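To illustrate the shared-Tri-plane design discussed here, a hypothetical NumPy sketch of querying one set of hand planes for both hands follows. All names are invented, and nearest-neighbor lookup stands in for the bilinear interpolation a real implementation would presumably use.

```python
import numpy as np

def sample_plane(plane, uv):
    """Nearest-neighbor lookup of a feature plane at coords in [-1, 1]."""
    res = plane.shape[0]
    idx = np.clip(((uv + 1) / 2 * (res - 1)).round().astype(int), 0, res - 1)
    return plane[idx[..., 0], idx[..., 1]]

def query_hand_triplane(planes, xyz, is_left):
    """Query a single shared hand Tri-plane; left-hand points are mirrored
    across the x=0 plane so one set of planes serves both hands."""
    p = xyz.copy()
    if is_left:
        p[..., 0] = -p[..., 0]  # mirror x before projecting onto the planes
    return (sample_plane(planes['xy'], p[..., [0, 1]]) +
            sample_plane(planes['xz'], p[..., [0, 2]]) +
            sample_plane(planes['yz'], p[..., [1, 2]]))
```

Because the left hand is mirrored before lookup, a left-hand query returns exactly the features a right-hand query would return at the mirrored point, which is what prevents the generator from having to learn a second hand Tri-plane.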
> **Question 2**
We didn't use any truncation for qualitative results. To make fair comparisons, we disable truncation for all the baseline models. We will point this out in our paper.
> **Question 3**
It is correct, we use the same representation as AvatarGen. We describe this in lines 158-159 of our main text. Our 3D representation is Tri-plane, with SDF as the proxy for geometry. We mention this in line 101 of our main text. Our volume rendering process is identical to NeRF. We will modify the main text to explicitly point these out in the final version.
> **Limitations**
Thanks for the suggestions. We conducted an ablation study to investigate the effect of SMPL-X estimation quality. We sample random noise and add it to the clean SMPL-X parameters to create a noisy version of the training dataset. The experiment results are summarized below:
| SMPL-X | FID↓ | FID_f↓ | FID_h↓ | PCK↑ | PCK_f↑ | PCK_h↑ | Exp↓ | Shape↓ | Jaw↓ | Body↓ | Hand↓ |
|-----|:------------------:|:-------------:|:----------:|:------------------:|:-------------:|:----------:|:------------------:|:-------------:|:----------:|:-------------:|:----------:|
| noisy | 8.61 | 10.48 | 19.55 | 62.17 | 90.48 | 31.14 | 6.78 | 3.88 | 7.45 | 1.51 | 3.75 |
| clean | 5.88 | 10.06 | 19.23 | 65.14 | 91.44 | 38.53 | 5.56 | 3.66 | 6.57 | 1.24 | 3.30 |
As we can see, noisy SMPL-X parameters decrease the model's performance on all the evaluation metrics, mainly affecting the control ability and full-body image fidelity. Therefore, a more precise SMPL-X estimation method may largely improve control ability and full-body image quality. We will discuss this in the limitations section and move it from the supplementary material to the main text.
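For concreteness, the perturbation step can be sketched like this (an illustrative reconstruction only; the noise distribution and the scale `sigma` below are arbitrary and not the values used in the ablation):

```python
import numpy as np

def perturb_smplx(params, sigma=0.05, seed=0):
    """Add zero-mean Gaussian noise to every SMPL-X parameter array,
    simulating an imperfect off-the-shelf estimator. `sigma` is illustrative."""
    rng = np.random.default_rng(seed)
    return {k: v + rng.normal(0.0, sigma, size=v.shape)
            for k, v in params.items()}
```

Each parameter group (body pose, jaw pose, shape, etc.) is perturbed independently, so the same procedure degrades both control-related and fidelity-related inputs to training.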
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response to my points. I do not have any additional questions. Overall, I find that the justification for all of the contributions is there, and while none of the components are entirely novel, the system as a whole does enable improved performance, especially for hands and jaw. Thus, I am leaning positively.
---
Reply to Comment 1.1.1:
Comment: We are glad that all of the reviewer's questions have been answered. Thanks for your valuable time! | Summary: The authors propose novel part-aware sampling and feature parametrization strategies to improve the fidelity of the avatar model, especially for smaller body parts. These techniques enable the efficient learning of diverse fashion shapes with a focus on hand and facial details. In experimental evaluation on several benchmarks, the method surpasses all baselines, including the recent work AG3D, by a huge margin. The results emphasize the necessity of the introduced discriminator techniques. Moreover, the authors demonstrate control with text- and audio-generated poses as a possible application of their model.
Strengths: I appreciate the authors' effort in presentation results and getting fair comparisons with baselines.
- Undoubtedly, the proposed technique for discriminative learning is important and interesting - easily transferable to other models, which is clearly well studied
- The authors suggest using separate representations for significant parts - hands, face, and body as a simple technique to drastically improve the capacity of the model
- Comparison presented on three data unrelated to the training corpus.
Weaknesses: - some important architectural choices are not studied. The original idea of sepration face and body doesn’t have cleare explanation or intuition to separate face from the body for features (not in discriminator)
- The ideas lying in the core of the method are quite unpretentious, which is not bad but it decreases the novelty and impactful of the model itself.
- ethical and limitation sections are missing in the main text
- Animations are only shown for synthetic pose sequences. Try to extend it with real sequences (e.g. AMASS), the body movements can improve the credibility of the method.
- No images from the competitor, who utilizes the part-based discriminator as well - AG3D - ask authors to sample more images for you.
- SMPL-based deformation model will affect loose clothing that is not covered in the main text
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Why do you condition the discriminator on shape and expression (and not on pose, to handle hand/skeleton deformations)? Please clarify this point in the main text. Is it important? If so, could you add reference numbers for that?
- Is it possible to compare the methods with a human preference score, since the metrics here are not ideal measures? At least a more careful qualitative evaluation would be a good extension; it is difficult to see the difference in Figure 4. Is it possible that the FID results here do not correlate well with the visual ones?
- What if the sizes of the face and hand tri-planes were downsampled by another factor of two?
- Does the geometry contain any facial attributes related to expression, or fingers?
- What happens with loose clothing, since you have quite a simple deformation model?
- How good is this part-based scheme for image inversion? Do you have any results or intuition in comparison with EVA3D or AvatarGen?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: It is difficult to handle wrinkles or accurate textures, as well as loose clothing. Moreover, the demonstrated method is less efficient due to the extended representation. Since the model is trained without video data, pixel correspondences cannot be definitively verified and may be incorrect. The trained model has a strong bias toward fashion poses due to the training data, which should be taken into account in all conclusions drawn from the model.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and constructive suggestions. We respond to each of your comments one by one in what follows.
> **Weakness 1**
Our motivation for separating face and hands from body features is to increase the resolution of the Tri-planes and improve the model's capacity for these small-scale regions. This design choice is confirmed by the ablation studies in Table 3(a) of our main text: when we add two separate Tri-planes for the face and hands, we achieve the best results in terms of visual fidelity.
> **Weakness 2**
While we acknowledge the individual components used in our work have been explored in previous studies, combining and adapting them for generative and expressive 3D human modeling is a challenging and non-trivial task. Our carefully designed XAGen not only generates high-quality 3D fully animatable avatars but also achieves clear improvements in face and hands generation/animation. Thus, we believe integrating all the important modules coherently and demonstrating their effects through a comprehensive ablation study constitutes a valuable contribution, as also highlighted by Reviewer 3QMX.
> **Weakness 3**
Thanks for pointing this out, we will move these two sections from supplementary material to the main text.
> **Weakness 4**
We have generated animation results using sequences sampled from the AMASS dataset. The image results can be found in Figure 1(b) of the rebuttal file, and we have sent the link to the video results to the AC. In addition, the animation results in our initial submission are driven by sequences from the TalkSHOW dataset, which are tracking results of real videos rather than synthetic ones.
> **Weakness 5**
We have contacted the authors for the AG3D results. The qualitative comparisons can be found in Figure 2(b) in the rebuttal file.
> **Weakness 6**
We agree that SMPL-based deformation will struggle with modeling loose clothing. Loose clothing is a long-standing challenge for 3D human modeling. We believe an advanced human body prior or independent clothing modeling is helpful to alleviate this issue. We will add the discussion of this point into the limitation section in our main text.
> **Question 1**
We chose these designs experimentally. To investigate these choices, we conducted additional ablation studies; the results are shown below:
| Expression | FID_f↓ | PCK_f↑ | Exp↓ |
|-|:-:|:-:|:-:|
| w/o | 9.57 | 91.43 | 5.86 |
| w/ | 10.06 | 91.44 | 5.56 |
*Expression:* Although adding the expression condition slightly worsens the face FID (9.57 → 10.06), it improves the control ability in terms of facial expression.
| Body Pose | FID↓ | PCK↑ | Body↓ |
|-|:-:|:-:|:-:|
| w/ | 14.23 | 66.69 | 1.00 |
| w/o | 5.88 | 65.14 | 1.24 |
*Body Pose:* It can be observed that conditioning on the body pose significantly degrades image quality. Although it improves the control ability for body pose, the quality drop is too large.
| Hand Pose | FID_h↓ | PCK_h↑ | Hand↓ |
|-|:-:|:-:|:-:|
| w/ | 19.57 | 40.07 | 3.59 |
| w/o | 19.23 | 38.53 | 3.30 |
*Hand Pose:* We can see that conditioning on the hand pose does not affect visual quality much. Though PCK drops slightly without the condition, the MSE for hand control improves. Thus, we skip it to save computation cost.
> **Question 2**
Yes, we conducted human studies for qualitative comparisons. The results are summarized in Table 2 of our rebuttal PDF file. We can see that our method outperforms baselines in terms of texture and geometry quality on four datasets. The human preference scores are in line with the FID scores reported in Table 1 of our main text.
> **Question 3**
To answer this question, we conducted ablation experiments accordingly. The results are shown in the Table below:
| Triplanes | FID↓ | FID_f↓ | FID_h↓ |Exp↓ | Jaw↓ | Hand↓ |
|-|:-:|:-:|:-:|:-:|:-:|:-:|
| half | 7.05 | 9.59 | 20.33 | 6.17 | 7.03 | 4.11 |
| full | 5.88 | 10.06 | 19.23 | 5.56 | 6.57 | 3.30 |
It can be observed that halving the size of the face and hand Tri-planes causes a performance decrease in full-body and hand quality, along with a drop in expression, jaw pose, and hand pose control. These results show that smaller Tri-planes reduce the capacity of our generator and weaken its control abilities.
> **Question 4**
We tried our best but still could not understand this question. We are glad to answer it further during the discussion period.
> **Question 5**
As shown in the bottom figure of Figure 6(b) in our main text and the right figure of Figure 1(b) in our rebuttal file, when it comes to loose clothing (e.g., a dress), our generator learns to separate the dress and attach it to the two legs. When the avatar opens its legs, the long dress also opens from the middle. The reason is that our generator adopts nearest-neighbor LBS, so each query point is attached to its nearest SMPL-X vertex during inverse skinning.
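As an illustrative sketch of this nearest-neighbor inverse skinning (the names, shapes, and per-vertex transform convention here are our illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

def inverse_skin_nn(query_pts, smplx_verts, vert_transforms):
    """Warp posed-space query points to canonical space via nearest-neighbor LBS.

    query_pts:       (Q, 3) points sampled in posed (observation) space.
    smplx_verts:     (V, 3) posed SMPL-X vertex positions.
    vert_transforms: (V, 4, 4) per-vertex posed-to-canonical transforms
                     (the inverse of each vertex's skinned bone transform).
    """
    # Attach each query point to its nearest SMPL-X vertex ...
    d2 = ((query_pts[:, None, :] - smplx_verts[None, :, :]) ** 2).sum(-1)  # (Q, V)
    nn = d2.argmin(axis=1)                                                 # (Q,)
    # ... and apply that vertex's transform in homogeneous coordinates.
    homo = np.concatenate([query_pts, np.ones((len(query_pts), 1))], axis=1)
    canonical = np.einsum('qij,qj->qi', vert_transforms[nn], homo)
    return canonical[:, :3]
```

Because each point inherits the transform of a single nearest vertex, points between the legs follow one leg or the other, which is why a long dress splits from the middle as the legs open.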
> **Question 6**
We report the inversion results in the upper right of Figure 3 in the rebuttal PDF file. Compared with EVA3D, our result has better hands and more detailed geometry. Compared with AvatarGen, we invert the T-shirt collar better and also exhibit more detailed geometry.
> **Limitations**
Thanks for pointing these limitations out, we will discuss these points in our limitation section and move this section from supplementary material to the main text.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing all my points with valuable and informative answers. I hope you will add them to the main text. Q4 was about small high-frequency details like wrinkles or fingers. I will keep my original score (accept).
---
Reply to Comment 1.1.1:
Comment: Thanks for your time and efforts. We are glad that all of the points have been addressed, and we will include the above experiments in the main text of our final version.
Answer to Question 4: As shown in Figure 2 and Figure 3 (lower right) in the rebuttal PDF, our results show reasonable and detailed geometry for facial expressions (clear eyes, nose, and lips) and fingers, while none of the baseline models demonstrate comparable details. We also show dynamic results of mouth opening in the video of our supplementary material. However, wrinkles are more challenging than expressions and fingers; our method has limitations in modeling wrinkles, and we will discuss this in the limitation section. Moving forward, we plan to explore better representations (with stronger capabilities for high-frequency details) and stronger supervision signals (e.g., normal maps) to improve the details.
We hope this could answer your question. | Summary: The paper presents a framework for human 3D avatar generation. The whole model is trained on a set of 2D images, thus can support large variations in terms of shape. The framework is based on EG3D with several important modifications:
1) Incorporation of the inverse LBS, that will deform the canonical representation of the object.
2) Modeling hands and face with separate triplane branches.
3) Additional part based discriminators for face and hands.
The paper demonstrates good results, including modeling of face and hand motion, although the motion of these parts is rather weak and contains a lot of artifacts.
POST Rebuttal Summary: The answers in the rebuttal make sense; the authors provided additional results on sensitivity to SMPL-X errors. Accordingly, I increased the rating by one and encourage the authors to improve the presentation quality of the method and results.
Strengths: - The paper proposed a meaningful framework that combines several existing ideas into a single unified 3D framework.
- The paper demonstrates good results for the full body modeling.
- The paper contains rigorous evaluation of the different modeling aspects, as well as detailed comparison with sota methods.
Weaknesses: - The quality of presentation is not the best; the method is presented in such a way that it looks like a combination of InsetGAN [1] and EG3D. Also, visual results are presented on only a single dataset, or it is not clear where the results on the other datasets are.
- Some architectural choices seem questionable. For example, why was nearest-neighbor LBS used instead of the more advanced LBS approximation from HumanNeRF [2], or the even more advanced SNARF [3]? Also, EG3D's upsampler produces visible inconsistencies in the generated results; why is a more advanced training scheme, such as EpiGRAF [4], not used? The current LBS approximation will struggle with loose clothing that is not well represented by SMPL-X.
- The quality of hands and face is extremely poor, and the cause is not analyzed in the paper. Is this a dataset issue? Is it possible to use dedicated face/hand datasets in addition to the human bodies dataset, training only the face and hand discriminators with them? Or is it an SMPL-X fitting problem? Again, is it possible to use more specific hand/face models to improve this aspect?
[1] InsetGAN for Full-Body Image Generation https://arxiv.org/pdf/2203.07293.pdf
[2] HumanNeRF:Free-viewpoint Rendering of Moving People from Monocular Video https://grail.cs.washington.edu/projects/humannerf/
[3] SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural Implicit Shapes https://arxiv.org/abs/2104.03953
[4] EpiGRAF: Rethinking training of 3D GANs https://arxiv.org/abs/2206.10535
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors: Questions listed in weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 3 good
Limitations: - Inconsistency in the results.
- Poor quality of the hand and face regions.
- Not very good presentation. Visual results only on a single dataset, no visual comparison with other methods in supplementary videos.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful and helpful feedback. We respond to each of your comments one by one in what follows.
> **Weakness 1.1**
First, we agree with the reviewer that our presentation could be further improved. However, there do exist several significant differences between our framework and InsetGan/EG3D:
- InsetGan inserts face/hand images into the full body by optimizing the latent code of each pretrained part GANs to guarantee seamless merging. In contrast, our XAGen does not rely on any optimization or post-processing to merge the images of each part. Instead, our model learns to generate full-body images with plausible faces and hands in one forward pass directly.
- InsetGan is designed for 2D image generation and is not animatable. We believe it is non-trivial to directly apply InsetGan to animatable 3D avatar generation.
- EG3D is an unconditional 3D-aware GAN model proposed for static face/object modeling, while our model is designed for generating fully-animatable 3D human avatars, which is more challenging due to its high articulation, complex clothes, diverse appearances, and small-scale face and hands. Thus, combining EG3D with InsetGan directly cannot achieve this easily.
- EG3D only synthesizes one single Tri-plane and renders the whole image with one volumetric rendering process. Differently, we propose a multi-part rendering approach together with multi-part discriminators to improve fidelity and controllability for body, face, and hand.
> **Weakness 1.2: visual results**
There may be a misunderstanding because we did not add explicit captions in Figure 3 of our main text. Indeed, the first row in Figure 3 shows generation results on the UBC dataset, and the second row shows results on the DeepFashion dataset. We will clarify this in the final version of our submission. To evaluate performance on the other two datasets (SHHQ and MPV), we report more qualitative results in Figure 2(a) of the rebuttal PDF file.
> **Weakness 2.1: architectural choices**
Thanks for the constructive suggestions. We started from nearest-neighbor LBS to implement our framework, and the experimental results show that this approach is simple yet effective. Since our work focuses on 3D avatar generation rather than a new skinning technique, we did not further explore advanced techniques such as SNARF or HumanNerf. Moreover, SNARF and HumanNerf were originally designed for single-scene fitting, which may pose challenges in terms of generalization ability. Yet, this direction is still worth exploring, and we leave it for future work.
> **Weakness 2.2: EpiGRAF**
This question is very insightful. EpiGRAF discards the super-resolution module and proposes a patch-wise training scheme for 3D generative models. Although patch-wise training can reduce the computation cost of each iteration, it also has disadvantages, such as missing global information about the image. This drawback could be problematic for our task because we rely on the body discriminator to guarantee plausibility and increase the consistency between parts (face, body, and hands). Similar to EpiGRAF, EVA3D, one of our baselines, also skips the super-resolution module and synthesizes 512×256 images directly via volume rendering; however, its fidelity and controllability are not comparable with ours. Yet, it is still meaningful to try this advanced training scheme, and we leave it for future work.
> **Weakness 2.3: loose clothing**
We agree that the LBS approximation will struggle with modeling loose clothing. Loose clothing is a long-standing challenge for 3D human modeling that has not been fully resolved; even HumanNerf and SNARF, with carefully designed skinning techniques operating in single-scene fitting scenarios, cannot tackle it well. In future work, we may adopt a more advanced human body prior or model clothing independently to address this issue.
> **Weakness 3**
This is a meaningful question. First, we agree that the dataset could be an issue: the faces and hands cropped from the fashion datasets are less diverse and have a much lower resolution than commonly used face datasets such as FFHQ. We believe our method can benefit from a more diverse dataset with higher resolution for the face and hands. However, augmenting the training dataset with a high-quality face dataset could be challenging because of the distribution discrepancy between full-body images and face images. Nonetheless, this is an interesting idea, and we will explore this direction in future work.
To investigate the SMPL-X fitting problem, we conducted additional ablation studies. We add Gaussian noises into the clean SMPL-X fitting results and train the model again. The results are reported in the Table below:
| SMPL-X | FID↓ | FID_f↓ | FID_h↓ | PCK↑ | PCK_f↑ | PCK_h↑ | Exp↓ | Shape↓ | Jaw↓ | Body↓ | Hand↓ |
|-----|:------------------:|:-------------:|:----------:|:------------------:|:-------------:|:----------:|:------------------:|:-------------:|:----------:|:-------------:|:----------:|
| noisy | 8.61 | 10.48 | 19.55 | 62.17 | 90.48 | 31.14| 6.78 | 3.88 | 7.45 | 1.51 | 3.75 |
| clean | 5.88 | 10.06 | 19.23 | 65.14 | 91.44 | 38.53| 5.56 | 3.66 | 6.57 | 1.24 | 3.30 |
As we can see, noisy SMPL-X does not affect face and hand image quality much; it instead affects control ability and full-body quality. Therefore, a more precise or dedicated SMPL-X/face/hand estimation method may improve control ability more than hand/face quality.
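As a sketch of this noise-injection protocol (the parameter grouping follows the SMPL-X convention, but the noise scale and helper name are our illustrative assumptions; the paper's exact noise magnitudes are not restated here):

```python
import numpy as np

def perturb_smplx(params, sigma=0.01, seed=0):
    """Add i.i.d. Gaussian noise to each SMPL-X parameter group.

    `params` maps group names to arrays; `sigma` is an illustrative noise scale.
    """
    rng = np.random.default_rng(seed)
    return {name: val + sigma * rng.standard_normal(val.shape)
            for name, val in params.items()}

# Illustrative SMPL-X parameter groups.
clean = {
    "body_pose": np.zeros(63),   # 21 body joints x 3 axis-angle dims
    "betas": np.zeros(10),       # shape coefficients
    "expression": np.zeros(10),  # expression coefficients
}
noisy = perturb_smplx(clean, sigma=0.01)
```

Training once with `clean` fits and once with `noisy` fits, then comparing the metrics as in the table above, isolates the sensitivity to SMPL-X fitting errors.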
> **Limitations**
We will move the section for limitation discussion from the supplementary material to the main text, and discuss the limitations of our work thoroughly. Please refer to our responses above for the presentation issue and qualitative comparisons on other datasets. | Rebuttal 1:
Rebuttal: We want to thank all the reviewers for their constructive and insightful feedback. We appreciate the reviewers' time and efforts spent on our submission.
Please check our rebuttal PDF files uploaded here for the additional figures and tables. We have sent the video results on AMASS data to AC following the official instructions.
Please check the comments for each review, and feel free to ask if you have any questions about the rebuttal. We are glad to have further discussions with all the reviewers.
Pdf: /pdf/7415daf3357a63b13d0e94a65f17f11667aaa432.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This work addresses the problem of 3D full body avatar generation, going beyond prior work primarily on more detailed hand and face generation quality and controllability.
The pipeline comprises a 3D-aware GAN where the generator produces tri-plane feature maps from a noise vector, followed by a 3D decoder that generates the human geometry, represented by SDFs, in a canonical pose. The canonical SDF is then posed (with the help of the SMPL-X body model) and rendered to be discriminated by the image discriminators.
The key contribution of the work is a complete pipeline that models the face and hand regions separately, equips them with controllability (e.g., hand pose and facial expression) via the body model, and finally unifies them into the full-body generation. This effectively addresses a limitation of prior work, namely the compromised hand/face quality due to their relatively small region in most training images. Extensive experiments show the edge of this work over other recent human generators. The authors also demonstrate the usefulness of a full-body avatar generative model in the applications of text/audio-driven avatar synthesis.
Strengths: - This work provides a complete pipeline for controllable 3D avatar generation, where the components in this pipeline are combined in an intuitive and effective way. Speaking of novelty, although the tri-plane-based 3D GANs are common nowadays and the body-model-aided reposing mechanism is also well-known, I can't come up with other work in this line that well fixes the problem of hand/face quality and controllability. Therefore, the full method makes a good technical contribution to the avatar generative modeling.
- The experiment section is strong, and the results are impressive. It incorporates major recent methods on controllable avatar generation and shows a clear performance edge in terms of general geometric coherency, rendered image fidelity, pose controllability, and sharpness of the face/hand regions, validating the effectiveness of the major technical contribution.
- The paper is well-written, being clear and easy to follow. The main paper is mostly self-contained, and the SupMat is also very thorough.
Weaknesses: - The generated geometry has limited resolution, is prone to artifacts, and in most samples the face/hand regions are missing details that are present in the textured renderings. Despite the careful handling of the hand/face regions, the geometry of these areas generally shows lower quality than the texture. In other words, although the work targets geometry generation, the model often "covers up" what should be geometric structures with texture.
- Although the generated avatars can indeed be animated (via SMPL-X) as claimed, there seems to be no mechanism that ensures the physical correctness (e.g., semantics of body parts and cloth) or visual plausibility (e.g., pose-aware cloth deformation) of the animated results. For example, when the generated character has self-contact (e.g., Fig 3, lower right "Ours", the subject that touches the face), the contact areas are merged as if they were connected (please correct me if wrong); consequently, when animating this subject with novel poses (e.g., arms open), visual artifacts can appear.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - The hand, face, and body feature maps are generated together in a channel-stacked manner. Intuitively, the features of these three different parts are not spatially aligned, and generating them with the same generator might not have benefits other than saving computational cost. Was it investigated whether training separate generators for these three parts brings better quality?
- For the face region specifically, AvatarGen also employs a dedicated discriminator, similar to the one used in this work, and yet performs poorly in terms of the plausibility of facial detail and expressions. What is the key differentiator in the method that brings such a salient improvement?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The technical limitations and potential societal impacts are discussed in detail in the SupMat; I really appreciate that.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and recognition that 1) our pipeline is effective and makes a good technical contribution to avatar generative modeling; 2) our results are impressive and show a clear performance edge. We respond to each of your comments one by one in what follows.
> **Weakness 1**
This is a very insightful point. In our pipeline, we apply a super-resolution module to enhance the coarse texture results given by volumetric rendering, while there is no super-resolution module for geometry; thus, the generated geometry indeed has a lower resolution. Based on our current pipeline, there are two possible solutions. First, we increase the volume rendering resolution for geometry and show the comparison results in the lower-right figure of Figure 3 in the rebuttal PDF file. It can be observed that, with an increased rendering resolution, we see more geometric details on the face and dress, demonstrating that our model can improve geometry with a larger rendering resolution. Yet, this technique cannot fully address the issue; the second solution is to apply a real upsampling of the 3D representation to enhance the geometric details. We leave this for future work.
> **Weakness 2**
- To study the issue of contact area, we rotate the samples shown in Fig 3 in our main paper and render more results from multiple directions. As shown in Figure 1 (a) of our rebuttal file, we rotate the avatar by 50, 110, 220, and 330 degrees respectively. The results demonstrate that the hand is not connected or merged with the face.
- It is true that when animating the subject with novel poses like arms-open, we could observe visual artifacts. The main reason for this phenomenon is that our training dataset contains only fashion data, whereas arms-open poses are less frequent in the dataset. We believe a more diverse dataset with in-the-wild poses would help alleviate this issue.
- It is our limitation that our model does not incorporate physical correctness to improve visual plausibility. We agree that these constraints are helpful, and we will incorporate physical/penetration constraints to further improve the animation results in future works.
> **Question 1**
This is a good question. First, we have some design choices that help align the features of the different parts: (1) As introduced in our main paper (lines 163-167) and Appendix (Section 1.1), we compute the features of query points located in the overlap regions from the two related Tri-planes and then composite them to increase the transition smoothness and plausibility of the full-body image. (2) Our body discriminator is trained on full-body images to critique the generation results. It supervises the learning of full-body images, and its gradients are backpropagated not only to the body Tri-plane but also to the face and hand Tri-planes, which increases generation quality and helps align the features spatially.
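To illustrate point (1), a minimal compositing sketch (the linear blending weight here is our assumption for illustration only; the exact compositing function is the one defined in Appendix Section 1.1 of the paper):

```python
import numpy as np

def composite_overlap(f_body, f_part, w):
    """Blend features sampled from the body and part (face/hand) Tri-planes
    for query points lying in their overlap region.

    f_body, f_part: (Q, C) per-point features from the two Tri-planes.
    w:              (Q,) blend weights in [0, 1]; w=1 selects the part feature.
    """
    return (1.0 - w)[:, None] * f_body + w[:, None] * f_part

# Example: weights ramp from body-only to part-only across the overlap.
f_body = np.zeros((5, 16))
f_part = np.ones((5, 16))
w = np.linspace(0.0, 1.0, 5)
blended = composite_overlap(f_body, f_part, w)
```

A smooth ramp of `w` across the overlap avoids a visible seam where the part Tri-plane hands off to the body Tri-plane.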
To study the advantage of using a shared generator branch to produce the multi-part features, we conducted an additional ablation study using separate generator branches for the hand, face, and body. The results are summarized in the table below:
| Generator | FID↓ | FID_f↓ | FID_h↓ |
|-----|:------------------:|:-------------:|:----------:|
| separated | 7.65 | 12.20 | 21.03 |
| shared | 5.88 | 10.06 | 19.23 |
The results show that using separate generators does not improve generation quality. We believe the reason is the redundancy among separate generators, which increases computation cost and hinders the optimization of the generator.
> **Question 2**
The differences between ours and AvatarGen are:
- The rendering process for the face is different. AvatarGen crops face images from the synthesized full-body images, whereas ours uses face camera poses to render face images directly from the face Tri-planes. Compared with AvatarGen, our independent rendering process disentangles the learning of face and body, which reduces the training difficulty on 2D image datasets.
- Our multi-part discriminators have stronger supervision on face than AvatarGen. In our pipeline, we have face discriminator and full body discriminator. We, therefore, compute adversarial loss terms for both full body and face. And both full body discriminator and face discriminator critique the face regions and supervise the synthesis of faces, which further enhances face quality.
- Our face discriminator is conditioned on facial expressions while AvatarGen does not use such conditions. Thus, our discriminator leverages the prior knowledge provided by SMPL-X to supervise the training of generator. It can help enhance facial details (i.e., expressions) of the generated faces.
The above three reasons work together to bring a large improvement in the face synthesis of our pipeline.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed response. The rebuttal has addressed my concerns. I also share a similar opinion with other reviewers in several aspects: indeed, the technical contribution can roughly be seen as an insightful combination of existing techniques, but the results are nice, surpass the SOTAs, and the study is comprehensive. Therefore, I'd keep my original recommendation for acceptance. As mentioned by other reviewers, it would be helpful to include at least some of the limitations in the main text. Please include the extended experiments and illustrations from the rebuttal in the final version. Great job!
---
Reply to Comment 1.1.1:
Comment: Thanks for the reviewer's valuable comment. We are glad that we have addressed the concerns. We will add the limitation section, extended experiments, and illustrations in the main text of our final version. | null | null | null | null | null | null |
Algorithmic Regularization in Tensor Optimization: Towards a Lifted Approach in Matrix Sensing | Accept (poster) | Summary: This paper examines the role of gradient descent in inducing implicit regularization for tensor optimization, within the lifted matrix sensing framework. The authors show that with sufficiently small initialization, gradient descent applied to the lifted problem results in rank-1 tensors and critical points with escape directions. These findings show the significance of tensor parameterization for matrix sensing, combined with first-order methods, in achieving global optimality.
Strengths: - Seems well-written overall.
- Has some useful insights, like "our findings indicate that when applying GD to the tensor optimization problem (3) an implicit bias can be detected with sufficiently small initialization points."
- And also they show that, "...when initialized at a symmetric tensor, the entire GD trajectory remains symmetric, completing the requirements", which is interesting.
Weaknesses: - Needs more convincing or discussion that the theory can be useful for practical applications. I find the theory interesting, but unsure of the practicality. Of course, it's nice that they give an experiment on a two-layer neural network with quadratic activations, but I'm not sure quadratic activations have been used all that much practically. Also, the last "layer" of the neural network just sums the outputs of the "hidden" layer, which is not as general. I believe this is the biggest weakness of the paper.
- Hyperparameters for the different optimization algorithms used are not given. This makes it hard to believe that the authors made a good effort to optimize the unlifted problem, for example via a hyperparameter search.
- For Section 6.2, the experiment is under "low-sample" conditions, but this needs to be quantified, and an experiment under "high-sample" (or at least non-"low-sample") conditions may be of interest for completeness.
- A minor point: While I did look over the proofs in the appendix, the number of pages of proof in the appendix is a bit long at 21 pages. It might help the paper to condense the proofs.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - Line 27-29: What are the practical applications of two-layer neural networks with quadratic activations?
- Line 137: I'm assuming FOP stands for "first-order point", and this acronym should be explicitly clarified.
- Line 232: "fist" to "first"
- Line 278: "that wether" to "whether"
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: - The authors do not provide an explicit discussion of the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the detailed comments and constructive suggestions. The following are our responses to the review comments:
(1) We agree with the reviewer's observation that a two-layer neural network with quadratic activation alone is not a widely used model in modern-day machine learning, and solely focusing on it would not greatly benefit the machine learning community. However, the intent of this paper is not to provide a "trick" that only targets specific models; instead, we intend to theoretically prove that when optimizing over tensors for low-rank problems, an action implicitly performed in deep neural networks including CNNs [1], implicit regularization can be expected, and it eventually leads to better optimization results. We thank the reviewer for raising this important question and take this opportunity to elucidate the importance of this work and its applicability in more detail:
- Firstly, this paper focused on the general problem of matrix sensing, which includes many direct applications like medical imaging, matrix completion/recommendation systems, motion detection, and power system state estimation [2-4]. Since many of these applications are safety-critical, it is often desirable to search for the global optimum. It has been previously shown that a tensor parametrization can be helpful in pursuit of the global solution in a special case [5], and therefore we aim to prove that no explicit constraint is needed with this parametrization due to the implicit regularization effect. In this work, we focused on two specific problems to streamline the presentation, but the theoretical analysis can be readily extended to all matrix sensing problems, thereby providing merit to a wide variety of applications.
- Secondly, this problem has been extensively studied in the literature as an instance of non-convex optimization, to further promote the understanding of non-convex optimization and the process of training (Please see Section 1.1 for more references). This is why it is important to focus on this specific benchmark problem, since it will advance the current understanding of the interplay between over-parametrization and non-convexity. We find that over-parametrization is only a part of the story, since an appropriate optimization algorithm that can induce implicit regularization is also key to mitigating non-convexity. Enlarging the parametrization space is very similar to increasing the number of layers in a neural network, and therefore this work provides a different angle to the usefulness of large/deep neural networks, even if three-layer networks are already universal approximators [6], and to the problem of how to best utilize the rich parametrization via a random initialization.
We will add more discussions and simulations to the paper to show the importance in applications such as state estimation.
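For readers less familiar with the benchmark, the unlifted matrix sensing problem referred to above can be sketched in a few lines. This is only an illustration, not the paper's lifted method: the dimensions, step size, and iteration count are hypothetical choices of ours, and we assume the standard factorized objective $f(X) = \frac{1}{2}\sum_i (\langle A_i, XX^\top\rangle - b_i)^2$ with i.i.d. Gaussian sensing matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 8, 1, 40                       # illustrative sizes, not the paper's

Xstar = rng.standard_normal((n, r))
Mstar = Xstar @ Xstar.T                  # rank-r ground truth M* = X* X*^T
A = rng.standard_normal((m, n, n))       # i.i.d. Gaussian sensing matrices A_i
b = np.einsum('mij,ij->m', A, Mstar)     # measurements b_i = <A_i, M*>

def loss_grad(X):
    """f(X) = 0.5 * sum_i (<A_i, X X^T> - b_i)^2 and its gradient in X."""
    R = np.einsum('mij,ij->m', A, X @ X.T) - b   # residuals
    G = np.einsum('m,mij->ij', R, A)             # sum_i R_i A_i
    return 0.5 * np.sum(R ** 2), (G + G.T) @ X

X = 1e-3 * rng.standard_normal((n, r))   # small random initialization
f_init, _ = loss_grad(X)
for _ in range(3000):                    # plain gradient descent
    _, g = loss_grad(X)
    X -= 1e-4 * g
f_final, _ = loss_grad(X)
```

Depending on the random seed, plain gradient descent on this non-convex landscape may or may not reach the global solution, which is exactly the phenomenon the success-rate tables in this discussion quantify.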
(2) For the quadratic neural network training process, the only hyper-parameter involved is the initial learning rate (since we use the ADAM optimizer), which is set to the default value of 0.001. We appreciate the reviewer's suggestion: we computed the success rate for the same problem described in Table 2(a) with n=8, and obtained the following values:
| n=8, unlifted | lr = 1e-5 | lr = 1e-4 | lr = 1e-3 | lr = 1e-2 | lr = 0.1 |
| -------- | ------- | ------- | ------- | ------- | ------- |
| m=20 | 0 | 0 | 0 | 0 | 0 |
| m=30 | 0.1 | 0.3 | 0.3 | 0.4 | 0.2 |
| m=40 | 0.1 | 0.4 | 0.5 | 0.6 | 0.2 |
The above table shows that the initial learning rate in this setting does not significantly affect the success rate, and does not make the unlifted problem competitive with the lifted one.
As for the "low-sample" and "high-sample" terminology, it simply denotes the value of $m$, as $m$ is the number of observations. It is well known in the literature [2] that when $m$ is very large and the sensing matrices are sampled i.i.d. from a normal distribution, the RIP constant of the problem will be close to 0, and all local minimizers will correspond to global solutions, meaning the problem can be solved globally. We thank the reviewer for pointing it out, and we will provide more explanations in the main text should the paper be accepted.
(3) We appreciate the reviewer's suggestion of condensing the appendix, and will try to make it better compartmentalized so that readers only need to refer to short sections at a time. We also thank the reviewer for pointing out a few typos, which we will fix as well.
[1] N. Razin, A. Maman, and N. Cohen, “Implicit regularization in hierarchical tensor factorization and deep convolutional neural networks,” in International Conference on Machine Learning, pp. 18422–18462, PMLR, 2022.
[2] L. T. Nguyen, J. Kim, and B. Shim, “Low-rank matrix completion: A contemporary survey,” IEEE Access, vol. 7, pp. 94215–94237, 2019.
[3] Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, “Phase retrieval with application to optical imaging: A contemporary overview,” IEEE Signal Processing Magazine, vol. 32, no. 3, pp. 87–109, 2015.
[4] Y. Zhang, R. Madani, and J. Lavaei, “Conic relaxations for power system state estimation with line measurements,” IEEE Transactions on Control of Network Systems, vol. 5, no. 3, pp. 1193–1205, 2017.
[5] Z. Ma, I. Molybog, J. Lavaei, and S. Sojoudi, “Over-parametrization via lifting for low-rank matrix sensing: Conversion of spurious solutions to strict saddle points,” in International Conference on Machine Learning, PMLR, 2023.
[6] M.-X. Wang and Y. Qu, “Approximation capabilities of neural networks on unbounded domains,” Neural Networks, vol. 145, pp. 56–67, 2022.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer rWmQ,
Firstly, I would like to express my sincere gratitude for the invaluable feedback you provided on our NeurIPS submission. We have carefully reviewed your comments and believe that we can address the major concerns in our revised paper.
Having said that, if you have had the opportunity to go through our responses and still have any additional concerns or suggestions, we would greatly appreciate hearing them. We are committed to improving the quality of our work and would welcome the chance to further discuss any points of contention or areas of improvement with you.
Thank you once again for your time and insights. We look forward to your continued guidance.
---
Rebuttal Comment 1.2:
Comment: Thank you for your response.
(1) Your first bullet-point makes sense, thank you. For the second bullet-point, I understand and empathize with the authors' appeal to focus on common benchmark problems, and I agree that they should be studied. But I believe one or two benchmark problems that have been exhaustively studied have their limitations, both because they are exhaustively studied and because they are too few. I believe it would help the authors' argument if there were many more (common) benchmark problems, and/or a more practical benchmark to go alongside the exhaustively studied ones.
In the rebuttal, you mentioned:
> "We will add more discussions and simulations to the paper to show the importance in applications such as state estimation."
This would be great and would help convince me in the potential applications of the method.
(2) Thank you for your experiments, they have cleared up my questions about hyperparameters.
(3) Thank you.
---
Reply to Comment 1.2.1:
Comment: Thank you for your feedback and for carefully reviewing our work. We genuinely appreciate the time and effort you've invested.
(1) We concur with your observation regarding the potential limitations of solely focusing on the matrix sensing benchmark problem. While this paper serves as an initial exploration in this domain, our aspirations extend to broadening the analysis to encompass a diverse array of problems, leveraging the robust theoretical underpinnings established here. The primary contributions of this paper can be distilled into two pivotal insights:
- First, gradient descent, when initiated with small values, exhibits a pronounced implicit bias. This suggests that models with expansive parametrization should commence with modest initializations to avoid undesirable critical points.
- Second, if the optimization trajectory remains close to a constrained representation space (e.g., rank-1 tensors as delineated in our study), it inherits several advantageous properties, akin to being wholly encapsulated within that constrained space.
Building upon these foundational observations, our future endeavors will focus on devising enhanced metrics and algorithms for larger machine learning models, including neural networks.
(2) We have begun exploring the use of parallel computing and high-performance machines to tackle large-scale lifted problems for more significant applications. Given that this is a new framework, developing fast, tailored algorithms is a challenge. We aim to complete simulations with larger sizes within a month and incorporate them into the next version of the paper.
In summary, we hope our clarifications address your concerns. We humbly request a re-evaluation of our work, keeping these clarifications in mind. Regardless of the decision, we are grateful for your review and the insights shared. | Summary: Studies the implicit regularization of gradient descent for a certain tensor optimization problem, obtained through the “lifted” matrix sensing framework. Lifting matrix sensing problems is a technique for transforming the original (non-convex) landscape into a new (still non-convex) landscape with favorable properties. In particular, spurious local minima are converted into strict saddle points.
The current paper establishes that, under near-zero initialization, gradient descent over the lifted matrix sensing problem leads to approximate rank one tensors until a certain time step. Furthermore, it shows that one can extract a first-order point for the original matrix sensing problem from approximately rank one first-order points of the lifted problem. Along the way, derives a lifting framework over symmetric tensors for matrix sensing problems with a ground truth matrix of arbitrary rank. Experiments corroborate the theory by demonstrating that optimizing the lifted problem may lead to better reconstruction than optimizing the original problem.
Strengths: 1. Establishes a novel implicit regularization of gradient descent for tensor optimization problems. In particular, as far as I am aware, this work is the first to show for the specific problem of (5) that gradient descent will lead to a trajectory that is approximately rank one until a certain time step.
2. Existing analyses of implicit regularization in tensor problems have focused on gradient flow (gradient descent with an infinitesimally small step size), while the current work supports the more realistic gradient descent with a finite step size.
3. The technical tools used, e.g. the v-eigenvalue definition, may be of use for future work on the implicit regularization in tensor problems.
4. The paper reads relatively well. Ample explanations and high-level intuitions are provided regarding the theoretical analysis.
Weaknesses: 1. There seem to be gaps in the theory that are not adequately addressed (or mentioned explicitly). Specifically:
- Theorem 1 applies for some k < 1. In principle, at least from the theorem statement, it seems that k can be close to one, in which case the tensor is not really approximately rank one. For Theorem 2, k needs to be upper bounded by a quantity depending on the target matrix. The matter of when we can expect this to hold is not treated.
- Theorem 1 and 2 establish that the tensor will be approximately rank one until some time step, and that first-order points of tensors that are approximately rank one, but do not correspond to a global minimum, have an escape direction. Yet, it is not guaranteed that we will reach a critical point while the tensor is still approximately rank one.
- In Theorem 1, it is not specified how small the initialization scale needs to be. Does the theory suggest a scale of practical size and not exponentially small in the problem parameters?
- In Theorem 2, is the rank one escape direction in the same direction as $\hat{x}$? Otherwise the rank could increase upon escaping, in which case the fact that we can map rank one tensors to first-order points of the original problem may be of less use.
Obtaining full characterizations for the dynamics of gradient descent in non-convex problems is challenging, and it is not out of the ordinary to have gaps in the theory. Yet, I firmly believe that such gaps should be explicitly discussed and perhaps reconciled via experiments (e.g. showing that indeed the tensor stays approximately rank one until convergence).
2. A large part of the motivation behind looking at the lifted problem (5) and analyzing the implicit regularization of gradient descent for it, is the possibility of achieving better reconstruction in matrix sensing problems. I find the empirical evidence provided unsatisfactory in establishing the importance of the lifted formulation. The matrix sensing experiments cover only a very specific perturbed matrix completion problem and matrix sensing problem with rank one measurements. Showing that using the lifted problem (5) is viable in more generic cases or real-world datasets can greatly boost the significance and interest of the results in my opinion.
Furthermore, for the lifting technique to be practical, $l$ cannot be too large as the size of the tensor grows exponentially with it. It is not clear whether this is limiting for typical matrix sensing problems or not from the current evaluations.
3. Empirical support for Theorem 1, which establishes that the tensor will be approximately rank one until a certain time step, is lacking. The experiments only examine the resulting performance after extracting a rank one tensor from the last gradient descent iterate. However, they do not show whether the tensor is approximately rank one as the theory suggests, or it has grown in rank (i.e. the norm of the residual after reducing the rank one component is non-negligible). If it is the former, it strengthens the viability of the theory, while if the latter holds, then it is unclear whether the theory indeed explains the observed empirical phenomenon. I recommend reporting, e.g., the norm of the tensor after reducing from it the extracted rank one component or some other measure of how close to rank one the tensor is.
Additional (more minor) comments:
- In the empirical evaluation, success rate is too crude of a measure in my opinion, as it is possible that the difference in reconstruction error is not large while the difference in success rate is. I believe comparing (normalized) reconstruction errors will allow for a better comparison of the performance of the lifted and unlifted techniques.
- I found the inner product notation (without any subindices) to be confusing. One would expect the output to be a scalar.
- Escape direction from a critical point is not formally defined.
- In Theorem 1, it is not specified what t_T stands for.
- Experiments are for a custom version of gradient descent that does not quite align with the theory. I would recommend clarifying briefly in words (even if in an appendix) what are the modifications it introduces and why, rather than just referring to pseudo-code.
- In line 124 it is claimed that the objective considered in this paper is more general than those studied in related works. However, it seems to me that the setting is simply different/incomparable rather than more general. For example, the analysis of [1] considers any differentiable (locally smooth) loss.
[1] Razin, Noam, Asaf Maman, and Nadav Cohen. "Implicit regularization in hierarchical tensor factorization and deep convolutional neural networks." International Conference on Machine Learning. PMLR, 2022.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In the numerical experiments, is there a reason for comparing success rates as opposed to normalized reconstruction errors? How was the threshold of 0.05 chosen?
2. Why not parameterize the tensor as a symmetric rank one matrix from the get go, as opposed to optimizing over the full tensor. The former is more efficient in terms of the number of parameters and guarantees that the trajectory stays rank one. Is it a matter of optimization, i.e. that a symmetric rank one parameterization may converge to spurious local minima/non-strict saddles?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: As detailed in the review above, I believe there is room for improvement in terms of explicitly addressing the limitations and gaps of the current theory.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the detailed comments and keen suggestions.
(1) Due to space constraints, some explanations were placed in the appendix. We apologize for any confusion and will address each concern:
- Points 1 and 3: Theorem 1 outlines a relationship between the ratio $\kappa$ and the iteration count $t$: any desired $\kappa$ can be achieved with enough steps, given a small initialization scale. To avoid using an arbitrarily small $\epsilon$, we presented Corollary 1 in Appendix C.2, which establishes the order of magnitude of $\epsilon$ and the number of iterations needed under random initialization. For instance, to achieve the value of $\kappa$ required in Theorem 2, we need $t$ to be on the order of $\ln(||M^*||^2_F) \ln \left( \frac{1+ \eta \sigma^l_1(U)}{1+ \eta \sigma^l_2(U)}\right)^{-1}$, which is around 100 in the specific problem presented in Section 6.1.
- Point 2: Proposition 2 shows that while the tensor remains approximately rank-1, it can reach critical points of the lifted problem, with the lifted critical point comprising a dominant term and some noise.
- The rank-1 escape direction differs from $\hat{x}$ but can be computed deterministically. As we only require the tensor to be approximately rank-1, we can set the learning rate to the same order of magnitude as the desired $\kappa$.
(2) We used perturbed matrix completion instead of standard matrix completion because its RIP constant can be easily calculated and better aligns with our theory. In both scenarios, the lifted formulation showed clear superiority even with $l=3$, but its high computational demands limit scaling with our current computational budget. Still, we explored various problem sizes and found the success of the lifted framework to be independent of the scale itself. We have started working on how to use parallel computing and high-performance machines to solve large-scale lifted problems. Since this is a new framework, developing fast, tailored algorithms is beyond simply modifying an off-the-shelf solver. We hope to complete the simulations with larger sizes in a month and add them to the next version of the paper.
(3) We do not directly showcase the rank of the tensor along the optimization trajectory because tensor rank, unlike matrix rank, is not easily computable: it is NP-hard to approximate even when $l=3$ [1]. Furthermore, for the same tensor, its rank, symmetric rank, and rank based on the v-eigenvalue can all be different [2], so there is no standard metric. This is why we instead use the success rate as a surrogate measure, where a higher success rate means a higher probability of escaping from saddle points, which is a hallmark of approximately rank-1 tensors.
However, we agree with the reviewer's comment that this is an important aspect of our paper. Thus, we used the algorithm (S-HOPM) in [3] to extract the dominant rank-1 component of tensors along the trajectory (which we call $w_1$), subtracted this main component from the tensor, and repeated the extraction procedure on $w-w_1$ to get $w_2$. We then calculate the ratio $\frac{||w_2||_2}{||w_1||_2}$. Using the problem defined in Section 6.1 with n=8, we get the following results for this ratio along the optimization trajectory of gradient descent:
| iteration | 20 | 40 | 60 | 80 | 100 | 120 | 140 | 160 | 180 |
| -------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- |
| $\epsilon=10^{-5}$ | 1.16 | 0.95 | 0.82 | 0.05 | 0.03 | 0.018 | 0.026 | 0.028 | 0.013 |
| $\epsilon=10^{-3}$ | 0.13 | 0.43 | 0.44 | 0.031 | 0.036 | 0.0008 | 0.034 | 0.028 | 0.022 |
| $\epsilon=0.1$ | 0.14 | 0.02 | 0.05 | 0.034 | 0.031 | 0.026 | 0.022 | 0.034 | 0.037 |
The table shows that the tensor gradually becomes more "rank-1", complying with Theorem 1. This phenomenon is not limited to small initialization scales, which is promising. However, it's essential to note that the computed ratio is not a precise reflection of $\kappa$, as rank-1 approximation may not be globally optimal for arbitrary tensors [3]. Nevertheless, this ratio provides meaningful insights into the training dynamics, further supporting Theorem 1.
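To make the extraction procedure concrete, the following is a minimal sketch of S-HOPM-style rank-1 extraction with one deflation step for a symmetric order-3 tensor. It is an illustration under simplifying assumptions (plain power iteration with a few random restarts; the function names are ours), and convergence is not guaranteed for arbitrary symmetric tensors, which is why the computed ratio is only an approximate reflection of $\kappa$.

```python
import numpy as np

def shopm(T, iters=200, restarts=5, seed=0):
    """S-HOPM-style power iteration for a dominant rank-1 component
    lam * v (x) v (x) v of a symmetric order-3 tensor T (n x n x n)."""
    rng = np.random.default_rng(seed)
    best = (0.0, None)
    for _ in range(restarts):
        v = rng.standard_normal(T.shape[0])
        v /= np.linalg.norm(v)
        for _ in range(iters):
            u = np.einsum('ijk,j,k->i', T, v, v)   # contract T with v twice
            nu = np.linalg.norm(u)
            if nu < 1e-14:
                break
            v = u / nu
        lam = np.einsum('ijk,i,j,k->', T, v, v, v)
        if abs(lam) > abs(best[0]):                # keep the strongest component
            best = (lam, v)
    return best

def rank1_ratio(T):
    """Extract w1, deflate, extract w2, and return ||w2||_F / ||w1||_F."""
    lam1, v1 = shopm(T)
    w1 = lam1 * np.einsum('i,j,k->ijk', v1, v1, v1)
    lam2, v2 = shopm(T - w1, seed=1)
    w2 = lam2 * np.einsum('i,j,k->ijk', v2, v2, v2)
    return np.linalg.norm(w2) / np.linalg.norm(w1)
```

On a tensor that is close to rank-1, this ratio is small, matching the decay seen in the table above.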
(4) Regarding the minor comments/questions:
- We'll use different notation for tensor inner-product.
- Escape direction refers to a direction where the Hessian has a negative eigenvalue.
- $t_T$ denotes the maximum allowed number of iterations, with $T$ being finite.
- customGD differs from vanilla GD as it finds descent directions deterministically at a critical point.
- Our techniques can analyze a wider range of tensor problems than [4], but it's not a generalization as the settings differ. We'll clarify this.
- The threshold of 0.05 was chosen manually to prevent huge reconstruction errors from corrupting our statistics, especially since we do not distinguish between failed cases with reconstruction errors of 0.5 and 10.
- If we parametrize the problem as a symmetric rank-1 tensor from the beginning, then no escape directions can be established using the techniques in Theorem 4, so it defeats the purpose of using a lifted framework.
[1] Swernofsky, Joseph. "Tensor rank is hard to approximate." Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2018.
[2] Comon, Pierre, et al. "Symmetric tensors and symmetric tensor rank." SIAM Journal on Matrix Analysis and Applications 30.3 (2008): 1254-1279
[3] E. Kofidis and P. A. Regalia, “On the best rank-1 approximation of higher-order supersymmetric tensors,” SIAM Journal on Matrix Analysis and Applications, vol. 23, no. 3, pp. 863–884, 2002.
[4] Razin, Noam, Asaf Maman, and Nadav Cohen. "Implicit regularization in hierarchical tensor factorization and deep convolutional neural networks." International Conference on Machine Learning. PMLR, 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response, I have read it and the other reviews carefully. It has addressed my concerns, a few notes though:
- Note that with its current phrasing, Theorem 1 supports only the existence of a single $\kappa$, whose value is not stated, and a time iteration for which the claim holds. From your response I understand that the intention is that for any $\kappa$ there exists a time iteration $t(\kappa, l)$ such that the claim holds. Thus, I strongly recommend adapting the theorem statement accordingly (the result as currently stated is substantially weaker).
- I appreciate the additional experiments, and acknowledge the evidence for the tensor remaining approximately rank one throughout optimization.
- As mentioned in my original review, since success rate is too crude of a measure, e.g. it is possible that the difference in reconstruction error is not large while the difference in success rate is, I still believe comparing (normalized) reconstruction errors will allow for a better comparison of the performance of the lifted and unlifted techniques.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful feedback and for highlighting areas of improvement in our paper. We genuinely appreciate the time and effort you've invested in reviewing our work.
(1) To address your concerns, we will refine Theorem 1 to clarify that $\kappa$ can be set arbitrarily, and a sufficiently large iteration $t(\kappa,l)$ will ensure the tensor along the optimization trajectory remains $\kappa$-rank-1 from $t(\kappa,l)$ onwards. The theorem will also emphasize the provision of a lower-bound for $t(\kappa,l)$ based on problem constants. Additionally, we will elucidate the simplification of this theorem through random initialization for better clarity. Your feedback was instrumental in identifying potential areas of misinterpretation.
(2) We acknowledge the reviewer's concern that simply providing the success rate might be misleading due to potential minute differences among the actual values. To address this, we re-ran the experiment described in Table 2(a) with n=8 and obtained normalized reconstruction errors. Since we cannot post images or PDFs during this rebuttal phase, we present the results in tables:
| Trials | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
| -------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- |
| Unlifted | 2.82 | 2.00 | 0.01 | 2.00 | 2.00 | 2.00 | 2.00 | 2.00 | 2.00 | 2.00 |
| Lifted | 8.37e-5 | 5.81e-4 | 2.26e-4 | 8.19e-5 | 8.3e-6 | 4.00 | 3.97e-4 | 8.21e-5 | 8.3e-6 | 3.5e-5 |
The data shows a significant gap in actual numerical values, and the choice of reconstruction threshold will not significantly affect the success rate. We chose the success rate metric because, for instance, the outlier in trial 6 of the lifted experiment would skew the average reconstruction error to 0.4 with a standard deviation of 1.2. This doesn't accurately represent the phenomenon where the lifted formulation nearly solves all instances perfectly. We appreciate the reviewer's feedback and will include more distributions of reconstruction errors in the revised version of our paper, should it be accepted.
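The outlier effect can be reproduced directly from the numbers in the table above. The snippet below is a minimal illustration (variable names are ours) of why a thresholded success rate is more robust here than the mean reconstruction error:

```python
import numpy as np

# Reconstruction errors taken from the rebuttal table (n=8, 10 trials):
# the lifted run solves 9/10 trials near-perfectly, with one outlier.
lifted = np.array([8.37e-5, 5.81e-4, 2.26e-4, 8.19e-5, 8.3e-6,
                   4.00, 3.97e-4, 8.21e-5, 8.3e-6, 3.5e-5])
unlifted = np.array([2.82, 2.00, 0.01, 2.00, 2.00,
                     2.00, 2.00, 2.00, 2.00, 2.00])

threshold = 0.05   # a trial counts as a success if its error is below this

def success_rate(errs):
    return float(np.mean(errs < threshold))

# Success rates cleanly separate the two methods (0.9 vs 0.1), while the
# mean error of the lifted runs (~0.4, std ~1.2) is dominated by the
# single outlier in trial 6 and hides the near-perfect typical behavior.
sr_lifted, sr_unlifted = success_rate(lifted), success_rate(unlifted)
mean_lifted, std_lifted = lifted.mean(), lifted.std()
```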
In conclusion, we hope our clarifications address your concerns. We kindly request you to re-evaluate our work in light of these explanations. Regardless of the outcome, we deeply value your insights and are grateful for your thorough review. | Summary: This paper presents a new GD algorithm that is suitable for the problem of matrix sensing. Within this algorithm a 1-rank approximation of the corresponding tensor is made. A distinctive feature of this algorithm is that some points of local minima are turned into saddle points, which improves the convergence to the global optimum.
Strengths: - A new modification of the gradient descent method was invented and implemented, which gives an advantage in certain problems
- This article shows the importance and application of concepts from tensor algebra to machine learning.
- All the algorithms are described in detail in Appendix
- All the proofs are given in Appendix, and all the necessary terms are neatly given in the text of the article and Appendix.
Weaknesses: - The article is overloaded with technical details and terms, many of which are only defined in appendix. This makes it difficult to understand the main idea of the article. Perhaps this style would be more suitable for people specializing in tensor algebra and matrix sensing problems.
- Only a few numerical experiments demonstrate a strong advantage of the method.
- Minor:
line 156 possible typo, must be $2d$ instead of $d$
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Have you tried testing the algorithm on more complex real-world problems?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: The theorems impose constraints on the operator $\mathcal A$, which appears in the formulation of the matrix sensing problem. Thus, the developed method is restricted to problems that can be reduced to the matrix sensing formulation and for which certain conditions hold. It seems difficult to say in advance (without numerical experiments) whether the algorithm will work on a particular problem.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the detailed comments and keen suggestions. The following are our responses to the review comments:
(1) We express our gratitude to the reviewer for bringing up this concern. To enhance the accessibility of our work, we intend to streamline the main text by providing concise details and devote more space to explaining the core theorems. This adjustment will facilitate comprehension for general readers less familiar with the subject matter. Furthermore, we propose to reorganize the appendix into separate sections, each self-contained, enabling readers to explore specific topics without the need to refer to other sections except when necessary for more in-depth investigation.
(2) In our experiments, we focused on two problems that we believe are important in the class of matrix sensing. Due to the computationally intensive nature of the lifted framework, we conducted experiments with problem sizes that were within the limits of our available resources. Our experiments demonstrate that small initialization with a tensor parametrization can drastically increase the probability of reaching the global optimum, and that a proper choice of algorithm is very important in the over-parametrized regime. In fact, both experiments showed meaningful improvements across a number of different settings. Nevertheless, we recognize the value in applying our technique to larger-scale problems, and we have started working on how to use parallel computing and high-performance machines to solve large-scale lifted problems. The issue is that since this is a new framework, developing fast algorithms tailored to this problem is beyond simply coding the problem and passing it to an off-the-shelf solver. We hope to complete the simulations with larger sizes in a month and add them to the next version of the paper.
(3) We appreciate the reviewer for pointing out our typo on line 156.
(4) Regarding the limitations, this lifted technique can be applied to general matrix sensing problems and does not require anything to be checked in advance, as long as the problem can be written in the form of equation (2). Theorem 2 is only a sufficient condition to ensure the conversion of spurious solutions, and the only requirement is that the unlifted problem should have a non-zero $\alpha_s$, which can be easily satisfied with full-rank sensing matrices. Alternatively, this non-zero $\alpha_s$ can also be expected to hold with high probability if the sensing matrices are sampled from i.i.d. Gaussians [1].
[1] Candes, E. J. and Plan, Y. (2011). Tight oracle inequalities for low-rank matrix recovery from a minimal number of noisy random measurements.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. After reading your answer and discussion with other reviewers, I am going to keep the positive score 7: Accept.
---
Reply to Comment 1.1.1:
Comment: Thank you for dedicating your time to review our paper and reading our response. We appreciate the suggestions you brought to the table and will take these into consideration when preparing for the next version. We really appreciate your engagement. | Summary: Gradient descent induces implicit regularization for tensor optimization. Specifically, it has a bias towards approximately rank-1 solution in the lifted matrix sensing framework. In Theorem 1, the authors show that the ratio between the second v-eigenvalue and the first v-eigenvalue exponentially decays to 0 in terms of the number of iterations. In theorem 2, the authors show that any first-order solution of the lifted problem that is far away from the ground truth is a saddle point.
Strengths: The authors carried out complicated proof and show non-trivial results. The generalization of lifted problems from r=1 to general r is very interesting. Given that this is very theoretical, the presentation is good enough to capture the main proof strategy.
Weaknesses: There is still some confusion and I hope the authors can better explain them in the paper.
1) In Theorem 2, the authors show that if the approximation error is large, then GD can always find a direction to improve. However, it is not clear whether the lower bound is actually meaningful. For example, suppose $M^*$ is the identity matrix on the first $r$ dimensions and $\hat{X}\hat{X}^T$ is the identity matrix on the next $r$ dimensions. Then I would think $\hat{X}\hat{X}^T$ is not a good estimate for $M^*$, but the approximation error of $2r$ may not satisfy the lower bound $\frac{L_s}{\alpha_s}r$. Since the ratio $\frac{L_s}{\alpha_s}$ relates to the RIP condition and the authors have discussed how overparameterization can relax RIP constraints, I encourage the authors to provide more discussion around the lower bound and its possible connection to the RIP condition.
2) On line 262, the authors mention that "increasing $l$ will decrease the ratio $\lambda_2^v(w_t)/\lambda_1^v(w_t)$ provided that $\sigma_1(U)\geq 1$". I am not sure how strict this condition is, and my guess is that some simple tricks may make this condition always hold.
3) From Theorem 1, it looks like the rank 1 regularization always holds regardless of the value of $l$, and lifting may not be necessary. Theorem 2 requires a large and odd value of $l$ which is not clear to me why this condition is needed. Some explanation would be nice.
4) When $w_t$ is approximately rank-1, what does it imply about the rank of the original estimate $\hat{X}$ for general $r$? It is not very clear to me.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: line 232 typo: "fist" to "first"
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The main results are theoretical results.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the detailed comments and keen suggestions. The following are our responses to the review comments:
(1) The reviewer raises a nice point regarding the specific example given. It is important to emphasize that the condition in (11) represents a sufficient condition rather than a necessary one. This implies that if the distance exceeds the RHS of (11), the conversion can be ensured based on the theory. However, even when (11) does not hold, it does not preclude the possibility of conversion. To address this concern, Theorem 7 in the appendix provides a proof that, irrespective of whether (11) is met, all spurious solutions can be converted to strict saddles as long as $||M^*||_F$ remains small. We intend to clarify this distinction more explicitly in the main text should the paper be accepted.
(2) The condition $\sigma_1(U) \geq 1$ is not an important one because even if $\sigma_1(U) < 1$, it still holds that $\sigma_2^l(U) \leq \sigma_1^l(U)$, and the RHS of (7) will be larger than 1 regardless. We thank the reviewer for pointing it out, and we will delete this sentence to avoid confusion.
(3) Our work builds upon [1], where the authors demonstrated that a large and odd $l$ is necessary for the conversion from spurious solutions to strict saddles. However, their theoretical analysis was limited to the special case of rank-1 matrices, a property that is difficult to maintain. In our study, we show that employing the gradient descent algorithm with a small initialization implicitly preserves this property without imposing any rank constraints. This finding is established in Theorem 1. In summary, a large $l$ is essential for the optimization landscape to possess favorable properties contingent on being rank-1, and Theorem 1 establishes that a simple algorithmic choice ensures the tensor's approximate rank-1 nature, irrespective of $l$. Furthermore, Theorem 2 demonstrates that this "approximate" rank-1 property is benign when compared to being truly rank-1.
(4) The tensor $w_t$ being approximately rank-1 has no implication on the rank of the original estimate $\hat X$, and the maximum rank of $\hat X$ is assumed to be known a priori in order for (5) to be established correctly.
[1] Z. Ma, I. Molybog, J. Lavaei, and S. Sojoudi, “Over-parametrization via lifting for low-rank matrix sensing: Conversion of spurious solutions to strict saddle points,” in International Conference on Machine Learning, PMLR, 2023.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I will maintain my score.
Yet for point (2), if $\sigma_1(U)<1$, then increasing $l$ may decrease the convergence speed, and from the later theorem a large $l$ seems to be required. I think it would be good if there were a simple solution that addresses the case when $\sigma_1(U)<1$.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer's diligent review and the insightful points raised. Our perspective on $l$ is that it should be chosen such that the conversion from spurious solutions to strict saddle points is ensured. Once this is established, we can leverage this information, along with other constants, to determine the required number of iterations for tensors along the optimization trajectory to approximate rank-1, as indicated by relations (7) or (9).
If, as the reviewer pointed out, $\sigma_1(U)$ is notably small, we can adjust the stepsize $\eta$ to be larger initially. This adjustment allows us to achieve a satisfactory $\kappa$ without necessitating an excessive number of iterations. Furthermore, our experiments have demonstrated that usually a choice of $l=3$ is already good enough, suggesting that there's typically no need for an exceedingly large $l$, which further addresses potential concerns when $\sigma_1(U) < 1$.
We recognize the importance of this observation and will ensure its inclusion in the revised version of the paper for clarity. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
ASPEN: Breaking Operator Barriers for Efficient Parallelization of Deep Neural Networks | Accept (poster) | Summary: The authors propose ASPEN, an opportunistic parallelism method that breaks the synchronization barriers of each operator in a DNN graph so that parallel compute resources can traverse and execute multiple data-paths independently, with much less synchronization overhead, in a **shared memory** system. This is achieved by:
* Splitting the operators/tensors into multiple tiles, and thus multiple data-paths, along the dataflow graph
* Implementing **decentralized** executors that traverse these data-paths independently, with only on-demand synchronization against a **centralized** information pool that captures the dependencies of the split graph
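A minimal sketch of this idea (this reviewer's own illustration, not the authors' code; the names `Tile` and `TileGraph` are hypothetical) is to record dependencies at the tile level rather than the operator level, so a downstream tile becomes runnable as soon as its specific parent tiles finish:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Tile:
    op: str     # operator name
    index: int  # tile index within the operator's output

@dataclass
class TileGraph:
    deps: dict = field(default_factory=dict)  # tile -> set of parent tiles

    def add_op(self, name, num_tiles, parents_of=lambda i: set()):
        for i in range(num_tiles):
            self.deps[Tile(name, i)] = set(parents_of(i))

    def ready(self, tile, completed):
        # a tile is runnable once all of its parent tiles have completed
        return self.deps[tile] <= completed

# Operator A produces 4 tiles; tile i of operator B depends only on tile i
# of A (e.g., an element-wise op), so B[0] is runnable before A finishes.
g = TileGraph()
g.add_op("A", 4)
g.add_op("B", 4, parents_of=lambda i: {Tile("A", i)})

completed = {Tile("A", 0)}
assert g.ready(Tile("B", 0), completed)      # no whole-operator barrier
assert not g.ready(Tile("B", 1), completed)
```

With operator-level barriers, B could not start until all four tiles of A were done; here B[0] starts as soon as A[0] does.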
Strengths: * The authors explored opportunistic parallelism, a less studied/adopted parallelism in modern machine learning frameworks/infrastructures, which is particularly helpful when three scenarios are present, namely:
1. The hardware system has significant threading (barrier) overheads, like CPUs
2. The ML models embrace multiple data-paths (like residual connections or multi-head attention)
3. The input data has a limited fully independent dimension, like the batch dimension during inference
* The implementation of ASPEN is intuitive/efficient, with correctness/completeness proofs
* The evaluation of ASPEN on ResNet and BERT demonstrates its efficacy (significant speedup) compared to popular frameworks implemented with operator barriers.
* ASPEN could be helpful in inference environments with heterogeneous/edge devices
Weaknesses: * While the authors explored opportunistic parallelism and demonstrated its efficacy on a single multi-core CPU, its application is limited, as the three scenarios mentioned in Strengths hardly coexist in modern ML workloads, e.g.,
1. GPUs don't have such high threading overheads and thus have much higher intra-operator parallel efficiency. GPU systems are usually bottlenecked by device memory bandwidth or interconnection bandwidth, which can't be solved by opportunistic parallelism.
2. When the app has abundant input data (e.g., a large batch size), as in training or batched inference, ASPEN is not likely to provide a performance margin over other frameworks (line 328)
* ASPEN is only valid for a shared-memory system, e.g., multi-threading on a single CPU; when scaled to multiple CPUs or multiple hosts (e.g., multi-processing), the IPC or interconnections could introduce significant memory overhead
* Some of the implementation details are not very clear, e.g.,
1. In line 176, the authors mention that the graph is split/merged column-wise and row-wise, but line 178 says "It merges the tiles column-wise" and nothing is mentioned about row-wise merging; the same goes for Figure 3.
2. The explanation of the Ready Pool in Section 3.3 is not very intuitive. At a high level I understand it tries to prioritize ready nodes with shallower depth and data reuse, but I am not sure why it has to be a matrix with hashing to column indices, e.g., why is a row-wise priority queue not enough?
3. The authors didn't explain how memory allocation/deallocation is handled, e.g., does splitting operators result in memory fragmentation? When does garbage collection happen, and who handles it, etc.?
* Evaluations are not comprehensive and are a bit contradictory to the claims:
1. ASPEN should provide more value when the input size is small (line 109/328) compared to other frameworks; however, Figure 6 shows that ASPEN demonstrates a higher speedup when the input/batch size is larger. The authors attribute this to larger inputs facilitating an increased number of isolated computation paths across operators; however, other frameworks should also benefit from this, as it amortizes barrier overhead (line 328)
2. Related to 1, an ablation study on how batch/sentence/image size affects ASPEN should therefore be provided to understand where ASPEN really outperforms other platforms. Additionally, since the authors mention that models with more execution paths are likely to benefit (line 290), it is worth breaking down ASPEN's performance on the MHA and MLP (FFN) parts of a transformer layer to see which provides more performance gain. Other than MHA, artificial graphs with various execution paths may also serve as a good ablation study.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In addition to the questions mentioned in Weakness, here are a few more:
1. As the authors mentioned, column-wise (i.e., on the batch dimension) graph partitions typically expose the most parallel paths (line 173) that don't require synchronization with the pool. But do you also partition row-wise/image-wise? Since these partitions may incur reduce-scatter/all-gather operations (even batch-partitioned ResNet has a similar issue due to batch norm), are they handled by ASPEN? Even in a shared-memory system, would they introduce high overhead? And if so, how does ASPEN decide which dimension to tile, and to what degree?
2. In line 281, the authors mention that GPT-2 is only evaluated for prompt/question encoding, which usually has a large sequence length. However, the majority of a generative inference task is per-token inference where batch size = sequence size = 1. How is ASPEN's performance on this? Have you considered head tiling to speed it up?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have discussed the limitations which are plausible, however there are still a few more to address:
1. Efficacy on GPUs. The authors mention that similar ideas can be implemented with CUDA Streams and Events; however, as mentioned earlier, GPUs don't have such high threading overheads, so the efficacy is questionable compared to un-tiled but well-vectorized GPU kernels.
2. Though the implementation of the DSE and Ready Pool looks robust, since it is still a runtime/software-managed data dependency controller, its efficiency can't be compared to native dataflow architectures with hardware-managed data dependencies, e.g., the IPU/RDU/WSE from Graphcore/SambaNova/Cerebras.
3. As the authors suggested, ASPEN can improve system utilization when shared among users; however, this comes at the cost of higher per-user latency, even though overall system utilization can be higher.
4. As mentioned in Weaknesses, it is limited to a shared-memory system, so it can't be leveraged in a distributed system.
5. The authors say ASPEN is orthogonal to existing fusion and slicing techniques (line 138); however, fusion and slicing would either amortize barrier overhead or compete with ASPEN for slicing/splitting degrees, which diminishes ASPEN's performance gain.
Nevertheless, ASPEN still shines for the exploratory work it has conducted on opportunistic parallelism in ML, despite all the limitations above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments and feedback! In this response, we will address and clarify your individual concerns one by one.
___
> **Issue 1** Applications of ASPEN are limited.
* GPUs don't have high threading overheads and are bottlenecked by memory bandwidth
Overcoming the threading overheads is one of the main contributions of opportunistic parallelism, but its dynamic and asynchronous nature provides additional benefits. We explain the detailed benefits of ASPEN on GPU in the general response.
* ASPEN is not likely to be useful with abundant inputs.
As shown in the evaluations, this is clearly not true. We explain this in detail in Issue 4.
> **Issue 2** ASPEN does not scale to multiple hosts.
We are currently extending ASPEN to a multi-host system as a separate work. We first separate the ASPEN graph into subgraphs that belong to different hosts. Once a DSE computes a node, it checks if its child belongs to a different host, and sends its data to the child’s host if so. The receiving host marks the node as complete and updates its child nodes. A child node enters the ready pool only when it belongs to the current host. ASPEN also brings benefits to such multi-host scenarios as the host computations and data transfers are effortlessly interleaved with one another, allowing for high utilization of both the hosts and the network.
This is clearly beyond the scope of proposing the concept and implementation of opportunistic parallelism in this paper, but to provide an insight into the potential of ASPEN, we will include a discussion on the multi-host extension.
> **Issue 3** Some implementations are unclear.
* Row-wise split/merge is missing on line 178 and Figure 3.
Row-wise split/merge happens in stage (d) of the given example. We will make this explicit.
* Why is a row-wise priority queue not enough?
Hashing and priority queues are combined to facilitate fast access times. By using hashing and priority queues, we can place shallower/deeper nodes in higher/lower priority queues in constant time and retrieve them in constant time. If we used sorted structures such as heaps, the access times would scale with the number of nodes, which could become a bottleneck in the system.
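To illustrate the intuition, here is a simplified sketch of our own (not the actual ASPEN code; the class name and grid dimensions are invented for illustration). The pool can be viewed as a fixed grid of queues, where the row is chosen by node depth (shallower = higher priority) and the column by a hash of the node, so insertion touches only a single bucket:

```python
from collections import deque

class ReadyPool:
    """Sketch of a two-dimensional ready pool: rows index priority by
    depth, columns are hash buckets. Push cost is independent of how
    many nodes are ready, and pop scans a fixed-size grid rather than
    a sorted structure that grows with the node count."""
    def __init__(self, rows=4, cols=8):
        self.rows, self.cols = rows, cols
        self.grid = [[deque() for _ in range(cols)] for _ in range(rows)]

    def push(self, node_id, depth):
        row = min(depth, self.rows - 1)      # clamp very deep nodes to last row
        col = hash(node_id) % self.cols      # constant-time bucket choice
        self.grid[row][col].append(node_id)

    def pop(self):
        for row in self.grid:                # highest-priority row first
            for bucket in row:
                if bucket:
                    return bucket.popleft()
        return None

pool = ReadyPool()
pool.push("deep_tile", depth=3)
pool.push("shallow_tile", depth=0)
assert pool.pop() == "shallow_tile"          # shallower node retrieved first
```

A single sorted heap would instead pay a logarithmic cost per operation in the total number of ready nodes, which is what the fixed grid avoids.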
* How is memory handled?
Nodes of ASPEN graphs act only as instructions to compute a given tile in a tensor. As such, a node only holds pointers to its respective tile location. These tile locations are still part of the original input/output tensors, as in the conventional operator-based approach. As such, no fragmentation exists, and memory allocation works the same way as in existing solutions. While we currently do not have runtime memory management, its implementation is trivial. A tensor is allocated when one of its tiles becomes ready and deallocated when all dependent tiles of the tensor are computed. The allocation is handled by the DSE that updates the tensor's first readied tile, and the deallocation by the DSE that computes the last dependent tile.
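This allocation/deallocation scheme can be sketched as follows (a hypothetical illustration of our own, not the ASPEN implementation; `TensorLifetime` and the byte-buffer allocation are invented stand-ins). Each tensor carries a count of its remaining dependent tiles; the DSE that flips each counter performs the (de)allocation, so no central garbage collector is needed:

```python
class TensorLifetime:
    """Sketch: a tensor is allocated when its first tile becomes ready
    and freed once every tile that reads from it has been computed."""
    def __init__(self, num_dependent_tiles):
        self.remaining = num_dependent_tiles
        self.buffer = None

    def on_first_tile_ready(self):
        if self.buffer is None:
            self.buffer = bytearray(1024)   # placeholder allocation

    def on_dependent_tile_done(self):
        self.remaining -= 1
        if self.remaining == 0:
            self.buffer = None              # last consumer frees the tensor

t = TensorLifetime(num_dependent_tiles=2)
t.on_first_tile_ready()
assert t.buffer is not None                 # allocated by first readied tile
t.on_dependent_tile_done()
assert t.buffer is not None                 # one consumer still outstanding
t.on_dependent_tile_done()
assert t.buffer is None                     # freed by the last consumer
```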
We will include the above explanations in our final manuscript. We also plan to fully release the ASPEN source code, so any details of its inner workings will be transparent.
> **Issue 4** ASPEN is faster with a larger input size, which is not comprehensive to the claims. Ablation studies are needed.
Increased batch sizes benefit ASPEN execution, as they enable much greater data isolation between resources. In traditional two-layered operator-based computation, it cannot be guaranteed that dependent computation tiles will be scheduled to the same parallel resource, as each operator computation is a separate function call. This requires scatter-gather memory operations between the parallel resources, which cause increased memory overhead and loss of computation utilization, whose impact grows as the number of parallel resources scales.
Depth-first computation and the Ready Pool allow dependent tiles to be scheduled to the same resource as much as possible, achieving effects like those of operator fusion. This increases data reuse and reduces memory traffic between resources. With large enough input sizes, it becomes possible that each resource never has to share its outputs with other resources, as each DSE is automatically allocated paths in different batch indexes, greatly reducing the memory overhead. Also, even when there is data shared between resources, only a few resources are involved simultaneously, which relieves the pressure on the memory system.
In our final manuscript, we will add a clear and explicit explanation of what isolated computation paths from large input sizes mean.
We also agree that ablation studies such as varying input sizes, and sub- or artificial-graph level experiments can help understand the speedup of ASPEN. However, contributions of ASPEN such as increased parallelism over operator boundaries, or asynchronous interleaving of scheduling and computation are most prominent in an end-to-end execution and are not easily visible in microbenchmarks. Nevertheless, we plan to add the mentioned evaluations.
> **Issue 5** Partitioning row/image-wise would incur high overhead.
As explained in Issue 3, tiles are referenced as pointers, and the tensors are not actually partitioned in memory. Therefore, scatter/gather or other memory operations that are not present in operator-based computations are unnecessary.
> **Issue 6** How is ASPEN’s performance on per-token inference?
As per-token transformer inference is similar to a single chain of matrix-vector multiplications, there is limited room for opportunistic parallelism, and thus the performance is quite limited. However, additional optimizations such as head tiling or iteration-level scheduling proposed by Orca (Yu et al., OSDI 2022), can be integrated for increased performance. We believe that these techniques would bring novel and intriguing prospects when combined with fine-grained dynamic scheduling and execution of ASPEN.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed explanation. I have a couple of further questions:
1. Issue 1. I am convinced that the same ideas can be applied to GPUs through tiled kernels/CUDA event APIs, though my concern is that when the GPU is already compute-bound (large batch size/input, pretraining) or memory-bound (generative inference) rather than synchronization/kernel-launch bound, how much value can ASPEN add? This may not be a fair question since ASPEN is an exploratory work on a new parallelization regime, but its benefit may be underestimated when the industry is dominated by LLM workloads.
2. Issue 5. Now I get that tiles are just pointer offsets, which makes sense in terms of performance and memory management, though I am still unclear whether you would partition along a reduction/accumulation dimension (which would introduce an all-reduce in a distributed system), and if so, how you evaluate the overhead of the induced accumulation operations and how you decide which dimension to partition.
Additionally, I believe Limitation bullet points 2 and 5 are not addressed.
Regardless, thanks again for your contribution, and I am looking forward to your final manuscript. | Summary: When we run a deep neural network, a sequence of operators is executed. Existing deep learning frameworks/compilers wait for the completion of the prior operator (A) before launching the subsequent one (B). The authors of this paper observe that some computation in operator B only depends on part of the computation of operator A. Thus, the authors propose to decompose the operators into finer granularity and utilize the fine-grained dependencies to expose more parallelism (they call it opportunistic parallelism). They implemented this idea, and experiments show that it achieves up to 3.2x and 4.3x speedups compared with the prior work TorchScript and TVM, respectively.
Strengths: 1. The authors observed the unnecessary barrier between operators and identified a new kind of parallelism.
2. The authors implemented a prototype system (ASPEN), and experiments show that the new parallelism is beneficial to inference on a multi-core CPU.
Weaknesses: 1. More case studies are needed to justify where the speedup comes from.
From Figure 6, we can see that ASPEN achieves 4x (GPT-2 S1024 B1) and 2x (ResNet50 B128) speedups. I am quite interested in where the speedup comes from. Usually, a new kind of parallelism (in this case, opportunistic parallelism) yields a good speedup when the existing parallelism is not enough to saturate the device. For example, parallelizing two operators gives a good speedup if each operator alone is not large enough to fully utilize the device. However, the operators in the two models with large sequence lengths or batch sizes should already be large enough to highly utilize the device (relying on intra-operator parallelism), so how can opportunistic parallelism gain more speedup? Do other factors contribute to the speedup? For example, a C++-implemented runtime, statically allocated memory (instead of dynamically allocated as in PyTorch), more efficient kernels for other operators like batch norm, group norm, and softmax, or better micro-kernels for GEMM. Thus, I would suggest adding more case studies to decompose the speedup. For example, run some operator-level and sub-graph-level experiments, use profilers to analyze the CPU utilization, and use ASPEN's micro-kernels without opportunistic parallelism (e.g., keeping the operator barriers) to measure the performance change.
BTW, I have tried the ResNet50 example and observed a good speedup on my workstation.
2. More details of the ASPEN implementation and baselines are needed
For the TVM baseline, which scheduler did you use: AutoTVM or AutoScheduler (Ansor)? What tuning configuration did you choose? For ASPEN, did you write all kernels yourself, or build on an existing BLAS library?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Could you please provide more case study to get the contributing factors of speedup?
2. Could you elaborate on more details of the ASPEN implementation?
[Minor]
It would be better to also cite and compare with Graphi [1], Nimble [2], and IOS [3] in Section 2 when discussing inter-operator parallelization.
- [1] Graphi: Scheduling Computation Graphs of Deep Learning Models on Manycore CPUs
- [2] Nimble: Lightweight and Parallel GPU Task Scheduling for Deep Learning
- [3] IOS: Inter-Operator Scheduler for CNN Acceleration
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: 1. Currently, only validated the effectiveness on CPU.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments and feedback! In this response, we will address your individual concerns one by one in addition to the general response. We hope this clarifies your concerns about our work.
___
> **Issue 1** More case study is needed to justify where the speedup comes from.
We agree that ablation studies at the operator or sub-graph level would be helpful to further understand the speedup of ASPEN. However, it must be noted that contributions of ASPEN such as increased parallelism over operator boundaries, asynchronous interleaving of scheduling and computation, and data reuse from depth-first execution are most prominent in a full end-to-end execution and are not easily visible in microbenchmarks with a limited number of operators or computations. Nevertheless, we plan to add kernel-level performance evaluations and memory and CPU utilization profiling results to our final manuscript, to more clearly show where the benefits of ASPEN come from.
For the increased speedup at large input sizes, we attribute the cause to the isolation of each computation resource in ASPEN. In traditional two-layered operator-based computation, it cannot be guaranteed that dependent computation tiles will be scheduled to the same parallel resource during subsequent operators, as operator calls are implemented as separate function calls. As such, scatter-gather memory movements between resources are required to share computation data, which causes increased memory overhead and loss of computation utilization, whose impact grows as the number of parallel resources scales.
The depth-first computation algorithm of the DSEs and the two-dimensional Ready Pool allow dependent computation tiles to be scheduled to the same computation resource as much as possible, achieving effects like those of operator fusion, but in an implicit way. This reduces memory traffic between resources and allows data to be reused as much as possible. With large enough input sizes, it becomes possible that each resource never has to share its computation outputs with other resources, as each DSE is automatically allocated computation paths with different batch indexes, greatly reducing the memory overhead of the computation. Also, it should be noted that even when there is data shared between resources, only a few resources are involved simultaneously, which relieves the pressure on the memory system.
In our final manuscript, we will add a clear and explicit explanation of what isolated computation paths from large input sizes mean, and how it improves the execution of ASPEN in terms of scalability, data movement, and resource utilization.
> **Issue 2** More details for ASPEN implementation and baselines are needed.
We will include more implementation details such as memory layout and access patterns of ASPEN, as well as details on computation kernel implementations and evaluation environments in our final manuscript. We also plan to fully release the ASPEN source code, so any details of ASPEN and its inner workings will be very transparent.
For the TVM baseline, we used AutoTVM following the instructions provided in the official TVM documents. We compiled the TVM programs in Python and exported them to the C++-based API of TVM for a fair comparison. We wrote all our kernels ourselves. We used simple for-loops for our kernels, except for the matrix multiplication kernels of tile sizes 1x8 to 12x8, for which we used AVX2 intrinsics for vectorization. These matrix multiplication tiles were also used inside other kernels, such as convolutions or linear layers.
> **Issue 3** It is better to also cite Graphi, Nimble, and IOS in Section 2.
Thank you for mentioning this! We will include them in our final manuscript.
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed response; looking forward to your final manuscript and the source code of ASPEN. | Summary: A DNN is composed of multiple computational blocks, each using different tensor operators. However, due to the nature of the computational graph, there are internal dependencies among the blocks and operators. Consequently, the synchronization barriers result in considerable overhead on modern high-parallelism hardware. This paper introduces ASPEN, which aims to mitigate the synchronization barriers between operators and to uncover parallel computation opportunities across operators. ASPEN consists of two optimization stages: offline and runtime. In the offline optimization stage, an Automated Parallelism Unit (APU) converts operators into a tile-based fine-grained dataflow graph. Then, during runtime, the workloads are distributed across the hardware. The results demonstrate the efficient performance of ASPEN compared to other frameworks.
Strengths: - This paper addresses a well-motivated problem and is easy to follow.
- The proposed techniques are generally sensible and effective.
- It includes a performance comparison across different model structures and significantly outperforms other frameworks on extremely deep model structures (such as GPT-2).
Weaknesses: - The optimizations proposed in the paper appear heuristic and do not guarantee optimal performance.
- The evaluation is not thorough, lacking some details on memory consumption and comparisons with other state-of-the-art frameworks, such as TASO, ONNXRuntime, and oneDNN.
- It is unclear how the barrier is removed in the APU optimization. Does this involve a trade-off in memory consumption by loading multiple weights into parallel units?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Algorithm 1 appears to be ad-hoc and heuristic, as it relies on the status of node execution. However, this heuristic cannot guarantee optimality and is not always effective. Have the authors considered other algorithms?
- Have the results from TVM been tuned?
- How does the memory consumption compare to other baselines? Do the optimizations incur any additional overhead?
- While it is good for this paper to compare with different frameworks, including DNN primitive frameworks and compiler-based frameworks, TensorFlow is a general training framework, whereas ONNXRuntime and oneDNN generally perform better in the inference domain. It would be better if the authors could provide a performance comparison against these frameworks.
- Is the improvement in speed mainly due to the Convolution and GEMM kernels?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments and feedback! In this response, we will address your individual concerns in addition to the general response. We hope this clarifies any concerns about our work.
---
> **Issue 1** The optimizations/algorithm appear to be ad-hoc and heuristic.
Our work aims to present a dynamic DNN scheduling and execution approach that can increase parallel resource utilization during runtime using tile-based opportunistic parallelism. Therefore, the main algorithm (Algorithm 1) is not designed to optimize the execution of certain DNNs or environments. Instead, it focuses on enabling opportunistic parallelism and dynamically increasing resource utilization by continuously scheduling any newly available computations during runtime. This strategy permits each computation resource to accommodate as many computations as the DNN offers throughout the runtime. By keeping every resource maximally utilized, the algorithm can collectively achieve high throughput, regardless of the DNN or hardware used.
We agree that maximal utilization of the resources does not guarantee optimality, and there exists room for improvement in our algorithm. For instance, we find that some tiles are more important than others, and prioritizing them can accelerate the execution, due to the dependency patterns in the DNN formed from pooling and strided layers. In this work, however, in order to focus on providing a general solution that first enables the novel approach of tile-based opportunistic parallelism, we left optimizations for future work. Nonetheless, we will include more details on the optimality of our algorithm, and other dynamic scheduling approaches that can lead to optimality in executions.
> **Issue 2** Removal of the barrier in the APU optimization is unclear. Does this involve loading multiple weights?
The key to removing synchronization barriers lies in both the tile-based decomposition of DNNs in the APU and the dynamic dependency tracking in the runtime. The APU partitions each operator and creates a tile-based DNN, which allows each tile to be managed as a graph node. Dependencies are atomically updated for each node at runtime, allowing resources to check dependency states without synchronization. Through this, ASPEN removes the need for synchronization barriers and allows for much higher parallel resource utilization.
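A minimal sketch of this mechanism (our own illustration with hypothetical names such as `TileNode`, sequentialized for clarity; the real runtime performs these counter updates atomically and concurrently across resources):

```python
# Illustrative sketch: each tile is a graph node with a dependency counter;
# a resource may execute any ready tile, and finishing a tile decrements its
# children's counters -- no operator-wide barrier is ever taken.
from collections import deque

class TileNode:
    def __init__(self, name, num_parents):
        self.name = name
        self.remaining = num_parents  # unmet dependencies
        self.children = []

def run(nodes, compute):
    ready = deque(n for n in nodes if n.remaining == 0)  # the "ready pool"
    order = []
    while ready:
        node = ready.popleft()       # a resource grabs any ready tile
        compute(node)                # execute the tile kernel
        order.append(node.name)
        for child in node.children:  # atomic-style dependency update
            child.remaining -= 1
            if child.remaining == 0:
                ready.append(child)  # schedulable immediately, no barrier
    return order
```

Note how a tile of a downstream operator enters the ready pool as soon as its own parents finish, even while other tiles of the upstream operator are still pending, which is the essence of opportunistic parallelism.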
This also means that there is loading of multiple weights, as the computation remains the same. The details of ASPEN memory usage are further explained in the next response. In our final manuscript, we will make it clear that our contribution is not an optimization or trade-off on the existing algorithms, but a new approach to scheduling and parallelization of DNN computation.
> **Issue 3** How does memory consumption compare? Do the optimizations incur any additional overhead?
We must first make it clear that ASPEN improves the parallel scheduling of DNN computations and does not alter the computation process of the DNN. Additional memory required by ASPEN is for creating tile-based graphs, and its nodes act only as instructions to compute the given tile location in a tensor. As such, it only holds references (pointers) to its respective tile locations and the tile locations referenced by the nodes are still a part of the original input/output tensors, as in the conventional operator-based approach.
The memory overhead of the tile-based graph used by ASPEN is minimal, as each node requires only ~100 bytes of memory. For example, in the ResNet-50 batch 1 case, ASPEN graph requires 836 kilobytes of additional memory, which is minuscule compared to the weights of ResNet-50, which is ~100 megabytes.
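As a back-of-the-envelope check of these figures (our own illustrative arithmetic; the ~25.6M-parameter FP32 size of ResNet-50 is an assumption on our part):

```python
# Rough consistency check of the quoted numbers (illustrative only).
node_bytes = 100                       # ~100 bytes per tile-graph node
graph_bytes = 836 * 1024               # reported ASPEN graph size, ResNet-50 batch 1
num_nodes = graph_bytes // node_bytes  # implied number of tile nodes (~8.5k)

weight_bytes = 25_600_000 * 4          # assumed ~25.6M FP32 parameters (~102 MB)
overhead = graph_bytes / weight_bytes  # graph overhead relative to the weights
```

Under these assumptions the implied graph overhead is well under 1% of the weight memory.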
The ASPEN implementation in our submitted supplementary material may use a relatively large amount of memory, due to the more relaxed memory allocation we used in our implementation. This is not reflective of the memory use required by the ASPEN algorithm itself. We will update our code to manage its memory as tightly as possible.
In our final manuscript, we will include explanations of how data are stored and managed in ASPEN, along with a detailed decomposition of ASPEN memory usage over different devices and DNNs. We also plan to fully release the ASPEN source code, so any details of ASPEN and its inner workings would become transparent.
> **Issue 4** Are results from TVM tuned? It would be better if there are comparisons against more frameworks.
We use AutoTVM, following the instructions provided in the official TVM documentation. We compile our model in Python and export it to the C++-based TVM API for a fair comparison. We tried to include as many comparison baselines available in the C/C++ domain as possible. However, some inference solutions include optimizations such as caching, approximations, or data modifications, which are largely orthogonal to the parallelism and scheduling contributions of ASPEN and make them unfair to compare against.
However, we plan to include the suggested frameworks such as ONNXRuntime and oneDNN in our final manuscript, as these frameworks provide options to adjust the level of optimization and allow for a fair comparison.
> **Issue 5** Is the speedup mainly from the computation kernels?
Our improvement comes from higher device utilization due to the removal of synchronization barriers, increased parallelism, and data reuse from depth-first node execution, which together allow better parallel scaling and increased per-resource throughput. As explained in the general response, our tile-based kernels are less performant than those of existing operator-based solutions.
We will include a more detailed decomposition of the performance contributions of ASPEN, including the performance of ASPEN kernels. However, it must be noted that contributions of ASPEN such as increased parallelism over operator boundaries, asynchronous interleaving of scheduling and computation, or data reuse from depth-first execution are most prominent in full end-to-end execution.
---
Rebuttal Comment 1.1:
Comment: We have made a slight typo in Issue 2, in the first sentence of the second paragraph.
> This also means that there is loading of multiple weights, as the computation remains the same.
should be corrected to
> This also means that there is **no** loading of multiple weights, as the computation remains the same.
Sorry if this has caused any confusion.
---
Rebuttal Comment 1.2:
Comment: Thank you for your further explanation, which has partially resolved my concerns. The author also promised to provide further evaluation and clarification in the final manuscript. I will raise my rating accordingly. | Summary: In this paper, authors proposed ASPEN, a parallel computation solution for DNNs, which utilizes a new class of parallelism for DNNs, namely opportunistic parallelism, to dynamically locate and execute any parallel computation opportunities during runtime. More specifically, the authors have presented three main key points in ASPEN:
(1) a tile-based graph partitioning unit that transforms operator-based DNN dataflow graphs into tile-based dataflow graphs, unlocking rich parallel computation opportunities, (2) a distributed scheduling algorithm that enables each resource to asynchronously track and compute a distinct computation path without encountering any data hazards or race conditions, and (3) a highly concurrent data structure that facilitates asynchronous information exchange among parallel resources.
The evaluation of ASPEN on various CNN and transformer-based model inferences on CPU have shown performance gains.
Strengths: 1. The paper is clear and organized. The authors presented ASPEN in three key components: the Automated Parallelism Unit (APU), the Distributed Scheduling Engine (DSE), and the Ready Pool. Each component is well introduced and analyzed. The overall workflow of ASPEN is clear.
2. ASPEN has shown strong performance on various DNN architectures, achieving speedups up to 6.2× against TensorFlow and 4.3× against TVM (GPT-2 S1024 B1).
Weaknesses: 1. As mentioned in the paper, ASPEN is targeted at CPUs. The motivation is not quite strong if only applicable to CPUs.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The hardware platform is limited to CPU, which is the biggest limitation for the proposed ASPEN.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments and feedback! In this point-to-point response, we will address and hopefully clarify your concern with our work.
---
> **Issue 1** The hardware platform is limited to CPUs.
As explained in detail in the general response, ASPEN is not necessarily limited to CPUs. ASPEN provides a general concept and solution for enabling a novel DNN scheduling and execution system that allows higher parallel resource utilization, and it is not confined to a certain hardware architecture. However, due to its novelty, we were unable to find a suitable computation backend for GPUs. Consequently, we were only able to provide evaluation results based on CPUs. Fortunately, we find that tile-based kernels are becoming more widely available, and therefore we expect a GPU backend for ASPEN to become available soon.
We will make this clearer in our final manuscript and explain in more detail how ASPEN is applicable to other hardware architectures, as explained in the general response.
---
Thank you again for your valuable feedback and suggestions on our work! | Rebuttal 1:
Rebuttal: General Response
---
Thank you for taking your time to review our paper! In this response, we will explain the essential value of ASPEN and the reasons behind our selection of evaluations. Then, we will clarify individual questions and concerns one by one.
As an exploratory work on fine-grained dynamic parallelism of DNNs, ASPEN aims to provide insights and solutions to the novel computation approach of applying tile-based opportunistic parallelism to DNNs. As explained in the paper, available DNN computations are scheduled in two hierarchical layers of inter-operator and intra-operator computations using a dataflow graph of operators. However, this two-layer approach limits the available parallelism within each operator in the form of synchronization barriers. This limitation in parallelism has been identified as a major bottleneck in resource utilization by previous studies such as Rammer (Ma et al., OSDI 2020).
As described in Section 2, many solutions have been proposed to alleviate this issue. However, they still rely on the use of operators, which inevitably limits parallelism to some extent. On the other hand, ASPEN opts to take a completely new approach by removing the use of operators entirely and dynamically scheduling any parallel tiles that are available for computation. This effectively combines the separate inter- and intra-operator computation spaces into a single unified space: the tile-based dataflow graph. This approach had not yet been explored in the literature, and it enables the complete removal of the parallelism limits caused by the operator-based approach. In this framework, any computation tile can be assigned to a computation resource as soon as it becomes available, independent of which operator it belongs to, leveraging the concept of opportunistic parallelism.
Unfortunately, due to the uniqueness of our work, only a limited array of existing software infrastructure is available for incorporating tile-based computation kernels. For this reason, we wrote our tile-based kernels in C and left the optimizations to GCC in our evaluations on CPUs. While these kernel implementations may not be as finely optimized as the existing high-performance computation libraries, they still allow us to demonstrate the capability of ASPEN's novel parallelism and scheduling approach in achieving substantial parallel resource utilization on CPUs. However, it is worth noting that such kernel generation is not applicable to less programmable hardware such as GPUs, as their code is much more hardware-dependent, with proprietary software stacks, and their kernel compilers are not versatile enough to be applied to tile-based kernels. As a result, our demonstration of the efficacy of ASPEN is limited to its maximal available scope (as of now).
Nonetheless, we believe that with the appropriate computation kernels, the novel approach proposed by ASPEN will greatly benefit DNN executions on GPUs, for the following reasons. One of the limiting factors to a high GPU utilization is the host-device scheduling and communication overhead in both data movement and kernel launches, as reported in studies such as Rammer and Nimble (Kwon et al., NeurIPS 2020). The dynamic and asynchronous nature of ASPEN allows for the seamless interleaving of data movement, kernel launches, and tile executions, enabling concurrent scheduling and processing. As a result, ASPEN greatly reduces the overhead of host-device communications. This asynchronous nature also means that the computing resources are at different stages of kernel execution during computation, which leads to the distribution of memory access requests across the temporal domain. This decreases the pressure on the shared memory bus and allows better utilization of the memory system, which mitigates the limitations in memory bandwidth that DNN executions on GPUs often face, as mentioned in studies such as Alpa (Zheng et al., OSDI 2022) and Welder (Shi et al., OSDI 2023).
Furthermore, we find that tile-based GPU kernels for DNNs are becoming available soon. To be specific, we find that Welder, a memory optimization work released a few weeks ago (July 2023), constructs a tile-based dataflow graph during its offline optimization stage, which can be used as an ASPEN graph with minor modifications. Although Welder still uses conventional operator-based execution during runtime, thus keeping ASPEN’s contributions untouched, we find that its GPU kernels can be decomposed to its offline tile-based form and be modified into a computation backend suitable for ASPEN. Leveraging these kernels, an effective extension of ASPEN for (Nvidia) GPUs is feasible using CUDA Streams and CUDA Events API. A separate CUDA Stream allocated to each DSE allows kernels launched by different DSEs to run concurrently, and DSEs can asynchronously check the completion of a kernel and update its child nodes using CUDA Events. Regardless, GPU support for ASPEN would not alter the core logic and design of ASPEN in any way, and therefore the contributions and benefits of ASPEN presented in our paper remain intact.
Tile-based understanding of DNNs has gained significant traction in recent years, as it provides a holistic view of the DNN and allows for a finer-grained analysis and management of both computation and data, which enables contributions that were previously impossible with the operator-based dataflow graphs. ASPEN is the first to explore the benefits of tile-based DNNs during runtime. Unlike previous tile-based works which focused on offline analysis and optimizations such as graph compilers (Roller, Zhu et al., OSDI 2022) or memory usage (Welder), ASPEN focuses on the benefits of tile-based DNNs on parallel resource utilization during execution using opportunistic parallelism. We hope that this clarifies the core value of ASPEN as an exploratory work providing insights and algorithms for tile-based dynamic scheduling and computation of DNNs. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: Inference performance is one of the key metrics driving the commercial adoption and integration of modern DNNs into user-facing applications. In the paper, the authors propose a framework, ASPEN, to improve the inference performance of DNNs by exploiting a novel strategy to extract maximal parallelism, called opportunistic parallelism, during the forward pass. Opportunistic parallelism dynamically locates and executes ready units of computational maximally based on the minimization of the number of synchronization barriers introduced by different operators. Synchronization barriers are removed based on a tile-wise decomposition of the input data and weights to produce parallel execution sequences capable of taking advantage of more hardware. Orchestration of the computational work is performed by 2 processes, the automated parallelism unit (APU), lowers the input graph from an operator-centric synchronization strategy to a tile-wise dataflow graph suitable for parallel execution with a small number of synchronization barriers. 
The lowered dataflow graph is then placed on the compute units by the distributed scheduling engine to assign available processors to units of work as the availability, or ready state, of the dataflow graph dictates. Based on these optimizations the authors demonstrate significant performance improvements over competing inference engine systems as the number of cores is increased.
Strengths: - The significance of the work to the community is clear as exploiting the maximal efficiency of hardware during the deployment of DNNs is of paramount importance for industrial applications.
- The lowering of the operator-oriented DNN to a dataflow representation that exposes ample parallelism is interesting, and the need for the authors to reimplement a large number of the lower-level primitives to achieve this feat speaks to the novelty of the approach. This also brings into question the current strategies employed by existing libraries/frameworks to execute DNNs and whether they should be extended to allow for the finer level of granularity required to facilitate the parallelism exposed by ASPEN.
- The ASPEN dataflow graph combined with the DSE runtime system is shown to achieve substantial performance gains over competing implementations for a number of different DNNs and multiple CPU architectures.
Weaknesses: - The evaluation section focuses on comparisons with other CPU architectures and ignores providing any data on competing implementations optimized for GPUs.
- It is not clear that the insights regarding the tensor decompositions and the resulting dataflow graph would easily generalize to GPU architectures or to the training phase. These concerns are acknowledged by the authors.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Though the results presented are CPU-based I wonder if comparing with GPU inference results, based on something like TensorRT, would be useful?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations were adequately addressed by the authors in the text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments and feedback! In this response, we will address your individual concerns one by one in addition to the general response. We hope this clarifies your concerns about our work.
---
> **Issue 1** The evaluation only focuses on CPUs and does not contain GPU results or comparisons against GPU inference results.
It is also not clear if the tile-based dataflow graph would easily generalize to GPUs.
As explained in the general response, due to the unique nature of ASPEN's approach, a suitable computation backend for GPUs is not currently available. Consequently, our evaluation results are presently confined to CPUs. However, we have observed a growing availability of tile-based kernels, and therefore we expect that the performance benefits of ASPEN on GPUs will soon be corroborated, as the principles of ASPEN suggest.
Also, ASPEN's tile-based dataflow graph and its execution are generalizable to GPUs, as operator-based GPU kernels already use tile-based parallel execution internally. The asynchronous per-tile kernel launches of ASPEN can be efficiently achieved using the CUDA Streams and CUDA Events APIs, by assigning a different CUDA Stream to each DSE and enabling asynchronous tile scheduling and dependency updates with CUDA Events. By constantly keeping scheduled kernels in the CUDA Stream queues through opportunistic parallelism, ASPEN can fully utilize the SMs of the given GPU throughout the whole DNN execution. ASPEN also provides additional benefits such as the interleaving of host-device data movement and tile execution, and better utilization of memory bandwidth.
It is important to note that GPU implementation results would not make any change to the core logic and design of ASPEN, implying that our contributions in this work are orthogonal to the results.
> **Issue 2** Will tile-based dataflow graphs easily generalize to DNN training?
We also expect that ASPEN is readily applicable to DNN training, given that backward propagation in DNNs primarily involves matrix multiplication, which ASPEN already supports. Gradient computations can be partitioned into tiles, and the dependencies between gradient tiles can be encoded into a graph to create an ASPEN graph for DNN training. This graph can be fed into the ASPEN runtime to automatically perform forward and backward propagation without any modification to the runtime. In Section 5 (Limitations and Discussions), we explain that existing approaches may use larger batch sizes to compensate for the limited parallelism of operator-based approaches, but this does not imply that ASPEN's efficacy is reduced in training. Rather, ASPEN enables additional parallelism in situations where increasing the batch size is unfavorable or impossible, such as training with limited device memory or datasets.
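As a toy illustration of why gradient computations tile naturally (our own sketch with a hypothetical helper, not the ASPEN code): for a linear layer $y = Wx$, the weight gradient $\partial L/\partial W = (\partial L/\partial y)\, x^\top$ decomposes into independent outer-product tiles, each of which could serve as its own node in a tile-based graph.

```python
# Hypothetical sketch: one tile of the weight gradient for y = W @ x.
# dL/dW is the outer product of dL/dy and x, so tile (i, j) depends only on
# one block of dL/dy and one block of x and can be scheduled independently.
def grad_w_tile(dy, x, i0, i1, j0, j1):
    """One tile of dL/dW = outer(dy, x): rows [i0, i1), cols [j0, j1)."""
    return [[dy[i] * x[j] for j in range(j0, j1)] for i in range(i0, i1)]
```

Each such tile writes disjoint output memory, so no synchronization barrier between gradient tiles is required.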
In order to improve the readability and clarity, we will be updating Section 5 with more details regarding GPU execution and DNN training in our final manuscript.
---
Thank you again for your valuable feedback and suggestions on our work!
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their thorough responses to the weaknesses I outlined in my original review. Based on their response to my review and the comments made by other reviewers I am increasing my score accordingly. | null | null | null | null | null | null |
Double Randomized Underdamped Langevin with Dimension-Independent Convergence Guarantee | Accept (poster) | Summary: This paper considers the problem of sampling from a Gibbs distribution $p(x) \propto e^{-U(x)}$ using discretized Langevin dynamics. Since the approximation error of such methods usually depends on $\mathrm{Tr}(\nabla^2 U)$, the proposed algorithm splits $U$ into a quadratic part $g(x) = \frac m2 ||x||^2$ and a remainder $f$, and only discretizes the dynamics according to $f$.
Such a split was already considered in [Freund et al. '21]; however, the main novelty of this article is that the discretization on $f$ is performed through a two-step method with random step-sizes, instead of a simple gradient update. This scheme shaves off a factor of $\epsilon^{-1/3}$ from the required time complexity to reach a Wasserstein error of $\epsilon$.
The proof is based on an approximate contraction bound for a quantity $\Omega_n$ that bounds the desired Wasserstein distance, followed by a precise analysis and optimization of the error in the aforementioned bound.
Strengths: This paper presents a novel algorithm for Langevin simulation, that achieves state-of-the-art performance for the dependency on both the precision requirement $\epsilon$ and the ambient dimension $d$. The proposed algorithm is fairly simple (at least for quadratic $g$) and easy to implement, and the main ideas behind it are clearly explained. Overall, the paper is fairly well-written, with only a few typos here and there.
Weaknesses: Clarity/soundness: some of the proofs are very hard to parse due to the amount of simplifications made at once from one line to the next. This is especially felt in B.2, where the first two inequalities below l.447 contain around 5 sub-inequalities to check each. The specifications $\alpha \sim \rho'$ and $\beta \sim \rho$ are also used inconsistently, which makes it hard to understand with respect to what quantities each expectation is taken.
Novelty/significance: The relationship to [Shen and Lee '19] and [Freund et al. '22] should probably be expanded. From what I understand, this paper unites the randomized midpoint method of the former with the combined optimization viewpoint of the latter, but it is unclear if there are challenges other than computational to this endeavor. Namely, neither of those methods require such a complicated step-size scheme, which seems to be the main novelty of the paper, but the need for it is unclear.
Minor remarks:
- eq. (3.5): what is A?
- l.185: "squre"
- the equation below l.279 should be a scalar product.
- in (B.4), shouldn't $z_n(t)$ be $\hat x_n(\alpha) - x_n^*(t)$ instead ? This change does ripple through the proof of Lemma 5.1, so I'm not actually sure of how minor it is.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - How important is the choice of $\rho$ ? I understand from Lemma 5.2 and its implications that for a given $\rho$, the choice of $\rho'$ is important to ensure these conditions, but why can't I, for example, choose a uniform prior for $\beta$?
- In section 5.2, it seems like you take $\bar w_n = x_n - x_n^* + v_n - v_n^*$ as the expectation of $w_n(s, \alpha)$; why is it the case ?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ● Neither of those methods require such a complicated step-size scheme, which seems to be the main novelty of the paper, but the need for it is unclear.
We introduce the random step size to bound error terms such as $\|\int_0^t x_n(s) - x_n^*(t)\,\mathrm{d} s\|^2$ in the discretization analysis. This term may be dimension-dependent, as a standard bound leads to an additional discretization error of order $dt$. We instead seek to bound this kind of error using the descent on $\Omega_n(t)$. Under the stochastic step size, $\Omega_{n+1}$ becomes $\mathbb{E}\,\Omega_n(t)$, and thus we can bound the dimension-dependent discretization error as $\|\int_0^t x_n(s) - x_n^*(t)\,\mathrm{d} s\|^2 \lesssim \mathbb{E}\,\Omega_{n}(t)$. There then remain additional conditions on $\rho$ and $\rho'$ to guarantee this bound, as specified in Lemma 5.2.
● The specifications $\alpha\sim\rho'$ and $\beta\sim\rho$ are also used inconsistently, which makes it hard to understand with respect to what quantities each expectation is taken.
Thanks for pointing out the clarity issue in the presentation. We frequently use Claim (B) in our proof, which indicates that we can replace the expectation under $\rho'$ with the one under $\rho$ up to a constant factor. Mostly, its purpose is to control the error term taken in expectation over $\rho'$ using the contraction, which is an expected value over $\rho$. We use Claim (A) in Lemma D.3. We will add more explanation to our proof, especially on the specification of $\rho$ and $\rho'$, in a revised version.
● How important is the choice of $\rho$?
We believe the choice of $\rho$ is not restricted to the proposed one; however, it has to satisfy several conditions. Our choice of $\rho$ induces a simple $\rho'$ and is a natural choice. We require that its expectation is of order $\Theta(h)$, which guarantees sufficient descent, and the average descent can control the dimension-dependent error as in (D.18). Moreover, the corresponding $\rho'$ should satisfy claim (B) in Lemma 5.2. A uniform $\rho$ is possible, given that the corresponding $\rho'$ is chosen accordingly by Lemma 5.2, but this requires further validation and induces a more complicated $\rho'$. As a reminder, a uniform $\rho$ does not lead to the random step size version of the randomized midpoint method: the $\rho'$ of the stochastic step size randomized midpoint method has a different distribution and does not satisfy claim (B) in Lemma 5.2.
● in (B.4), shouldn’t $z_n(t)$ be $\hat{x}_n(\alpha) - x_n^*(t)$.
We appreciate the careful reading and thank you for pointing out this issue. This is a typo on our part and does not influence the subsequent analysis; it can be fixed by changing only the equations in lines 430 and 431. $z_n(t)$ is $x_n(t) - x_n^*(t)$, but Equation (B.4) should be $\nabla U(x_n(t)) - \nabla U(x_n^*(t)) = \int_0^1 \nabla^2 U(s\hat{x}_n(\alpha) + (1-s)x_n^*(t))\, \mathrm{d} s\, z_n(t)$. The following analysis remains unchanged. The last inequality follows by dividing $\nabla f(\hat{x}_n(\alpha)) - \nabla f(x_n^*(t))$ into $\nabla f(\hat{x}_n(\alpha)) - \nabla f(x_n(t))$ and $\nabla f(x_n(t)) - \nabla f(x_n^*(t))$.
● In section 5.2, it seems like you take $\bar{w}_n= x_n - x_n^* + v_n - v_n^*$ as the expectation of $w_n(s,\alpha)$.
Thanks for pointing out the lack of clarity in our presentation. We do not take $\bar{w}$ as the expectation of $w_n(s,\alpha)$; there is a higher-order discretization error gap between the approximation $\bar{w}$ and $w_n(s,\alpha)$. Here we illustrate that claim (A) of Lemma 5.2 leads to a randomized midpoint discretization. One can refer to the first equation in line 447 for the rigorous proof. We will specify the error term and our choice of $\bar{w}$ in the main text in a revised version.
---
Rebuttal 2:
Comment: Thank you for your response. While the paper still might need a clarity pass, I am now more convinced of the technical novelty of introducing this double randomization scheme. I have therefore raised my score. | Summary: The paper suggests a novel version of the Unadjusted Langevin Algorithm with sample complexity in Wasserstein-2 distance scaling with the effective dimension of the problem (trace of the potential's Hessian) instead of the ambient space dimension in case of strongly log-concave distributions. This result completes and generalizes the results of [Shen and Lee, 2019].
Strengths: The research direction towards studying the sample complexity rates in terms of the effective dimension is interesting and potentially allows to explain the successful behaviour of the ULA-type algorithms in the high-dimensional problems.
Weaknesses: First of all, the suggested Algorithm 1 (DRUL) does not seem to be really an implementable one, since the discretization error in $x_{n+1}$ and $v_{n+1}$ is to appear in the practical implementations. It is not clear if the control of this discretization error would not yield an explicit dimension dependence in the stepsize $h$ in Theorem 4.2.
Second, the sample complexity scaling as $\varepsilon^{-2/3}$ is not completely convincing. For example, for the ridge separable potentials, which are the main motivating example of the paper, the Hamiltonian Monte-Carlo method is known to obtain a sample complexity of order $(d/\varepsilon)^{1/4}$, see e.g. [Mangoubi et al, 2017]. Thus there is a natural question of whether the $\varepsilon^{-2/3}$ complexity is optimal for Langevin-type algorithms. Again, it seems that this particular rate can degrade once the integral discretization in Alg. 1 is taken into account.
References:
Mangoubi, O., & Smith, A. (2017). Rapid mixing of Hamiltonian Monte Carlo on strongly log-concave distributions. arXiv preprint arXiv:1708.07114.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Is it possible to add any numerical findings illustrating the superiority of the doubly randomized ULA (with random step size) against the one with constant or decreasing step size? If one could trace the precise dependence upon $\operatorname{trace}(H)$ even in a toyish setup, I would lean towards increasing my score.
2. Are there any novel technical contributions developed to prove the result of Theorem 4.2? If yes, please add the corresponding discussion to the main text.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper is theoretical and no negative societal impact is expected.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ● First of all, the suggested Algorithm 1 (DRUL) does not seem to be really an implementable one, […] It is not clear if the control of this discretization error would not yield an explicit dimension dependence in the stepsize $h$ in Theorem 4.2.
We apologize for the possible misunderstanding in the presentation of Algorithm 1. We would like to clarify that Algorithm 1 is implementable and there is no dimension-dependent error or necessity of further discretization. The discretized process (4.1) is linear in $x_n(t)$ and $v_n(t)$, and thus its closed-form solution is given by a Gaussian distribution; the exact solution is given by (4.3). Also note that the integral in Algorithm 1 is tractable, e.g. $\int_0^{\alpha_n} A_{12}(s-\alpha_n)\nabla f(x_n)\mathrm{d}s = \int_0^{\alpha_n} A_{12}(s-\alpha_n)\mathrm{d}s \nabla f(x_n)$, and $A_{12}$ defined in Lemma 4.1 has a closed-form integral. To implement Algorithm 1, one only needs to sample from a Gaussian distribution whose mean is tractable and whose covariance matrix is given by (A.3). We will specify the solution of the integral in Algorithm 1 and the covariance in (A.3) in a later revised version.
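The underlying principle can be illustrated on a toy one-dimensional Ornstein-Uhlenbeck process (our own stand-in for a linear SDE, not the paper's actual Algorithm 1): because the drift is linear, the transition over a step $h$ is exactly Gaussian with closed-form mean and variance, so simulating the SDE exactly reduces to one Gaussian draw per step, with no further discretization.

```python
import numpy as np

def ou_exact_step(x, h, theta=1.0, sigma=1.0, rng=None):
    """One exact step of dX = -theta * X dt + sigma dB over time h.

    The SDE is linear, so X_{n+1} | X_n is exactly Gaussian with
    closed-form mean and variance; no Euler discretization error.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    mean = x * np.exp(-theta * h)
    var = sigma**2 * (1.0 - np.exp(-2.0 * theta * h)) / (2.0 * theta)
    return mean + np.sqrt(var) * rng.standard_normal(np.shape(x))

rng = np.random.default_rng(0)
x = np.zeros(100_000)              # many independent chains, started at 0
for _ in range(50):
    x = ou_exact_step(x, h=0.5, rng=rng)
print(np.var(x))                   # close to the stationary variance 0.5
```

Even with the large step $h = 0.5$, the empirical variance matches the stationary value $\sigma^2/(2\theta) = 0.5$, because the Gaussian transition is exact.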
● Second, the sample complexity scaling as $\epsilon^{-2/3}$ is not completely convincing. […] can degrade after the integral discretization in Alg. 1 is taken into account.
We would like to clarify that the ridge separable case is an illustration that $\mathrm{tr}(H)$ can be dimension independent. The analysis is not restricted to the ridge separable case and reduces the dimension dependence for a wide range of problems that have rapidly dropping Hessian eigenvalues. One can refer to our discussion at the end of section 4.1.
Moreover, Mangoubi et al. (2017) do not assume the ridge separable structure. They assume in Assumption 1.7 that the potential $U(x)$ can be separated according to a blocked state space, that is $U(x) = \sum_{i=1}^{d/m}U(x_{i})$ where $x_i\in\mathbb{R}^{m}$. Although Mangoubi et al. (2017) prove that HMC achieves a $(d/\epsilon)^{1/4}$ rate, the convergence guarantees require higher-order discretization schemes. These higher-order ODE solvers require higher-order smoothness, which is not necessary for our algorithm; for example, the leap-frog scheme requires third-order smoothness to obtain the guarantee.
A $\left(\frac{d}{\epsilon}\right)^{1/4}$ rate can be achieved via higher-order Langevin schemes by Mou et al. (2021) (High-order Langevin diffusion yields an accelerated MCMC algorithm) for the ridge separable case. However, it is based on a different oracle, $\Delta U$, rather than the gradient oracle considered in our algorithms, and the convergence has a much worse dependence on the condition number. The $\Delta U$ oracle is usually intractable; for general target distributions, numerical integration is required and the convergence rate degrades.
As for the optimality, the discretization scheme with convergence rate $\epsilon^{-2/3}$ is known to be optimal for underdamped Langevin algorithms [Cao et al. (2020)] (Complexity of randomized algorithms for underdamped Langevin dynamics). It is a discretization lower bound.
● Is it possible to add any numerical findings illustrating the superiority of the doubly randomized ULA (with random step size) against the one with constant or decreasing step size?
We have conducted synthetic studies showing that our DRUL (with random step size) achieves a more accurate mixing distribution than the randomized midpoint method with a constant step size. Here we illustrate the dimension dependence using synthetic data. We consider Bayesian ridge regression with $g(w) = 0.1\|w\|^2/2$ and $f(w) =\frac{1}{2n} \|Xw - Y\|^2$, where the rows of $X$ have unit norm, so the Hessian of $f$ has a dimension-independent trace. We evaluate the result by projecting to one dimension and computing the empirical Wasserstein distance (since it is non-trivial to obtain a precise high-dimensional distance from empirical data). We fix the step size, compare the mixing results, and average over 3 independent evaluations. Since the algorithms achieve high precision, a large number of samples (500000) is required to estimate the Wasserstein distance accurately even in one dimension, which is a huge computational cost in the high-dimensional regime. We therefore fixed a large step size $h=1$ (which keeps the number of steps needed for mixing low) and ran $T = 300$ steps to guarantee mixing. Below are the results, comparing the randomized midpoint method (RMM) and our DRUL method for dimensions $d = 5, 10, 100, 1000$.
| Dimension | 5 | 10 | 100 | 1000 |
|-----------|--------|--------|--------|--------|
| RMM | 0.0084 | 0.0083 | 0.0180 | 0.0416 |
| DRUL | 0.0085 | 0.0071 | 0.0050 | 0.0077 |
From the table, DRUL achieves a more accurate mixing distribution in high-dimensional spaces, showing its superiority as the dimension grows. Note that even when sampling directly from the target distribution 500000 times, the empirically estimated distance from the target is around 0.001-0.005; the DRUL method may therefore achieve better results than those shown in the table.
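To make the evaluation protocol above concrete, here is a minimal sketch (our own illustration, not the authors' code; the target and the projected samples are placeholder standard normals) of the one-dimensional empirical Wasserstein-1 distance, computed by matching sorted samples:

```python
import numpy as np

def empirical_w1_1d(xs, ys):
    """Empirical 1-Wasserstein distance between two equal-size 1D samples.

    In one dimension the optimal coupling matches order statistics,
    so W1 is the mean absolute difference of the sorted samples.
    """
    return np.mean(np.abs(np.sort(xs) - np.sort(ys)))

rng = np.random.default_rng(1)
u = rng.normal(0.0, 1.0, 500_000)   # e.g. algorithm samples projected to 1D
v = rng.normal(0.0, 1.0, 500_000)   # reference draws from the target
print(empirical_w1_1d(u, v))        # O(1e-3) for same-law samples at this size
```

At 500000 samples the sampling noise floor of this estimator is of order $n^{-1/2} \approx 10^{-3}$, consistent with the 0.001-0.005 baseline quoted above.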
As a reminder, our work is not a stochastic-step-size midpoint method: a stochastic-step-size midpoint method would induce a $\rho'$ that does not satisfy claim (B) in Lemma 5.2. Besides, a decreasing step size also applies to our method.
● Are there any novel technical contributions developed to prove the result of Theorem 4.2? If yes, please add the corresponding discussion to the main text.
We develop a new discretization scheme and bound the error-dependent term by an averaged effect, as discussed in Section 5. The analysis requires delicate control and a careful design of the choices of $\rho$ and $\rho'$; in particular, our analysis bounds the local error along the process, which leads to our choice of $\rho$ and $\rho'$. We will provide a clearer and more thorough discussion in a revised version.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed answer. Now I am more convinced of the theoretical contributions of the paper and will raise my score. | Summary: The paper adapts the Randomized Midpoint Method to the composite optimization context considered in Freund et al., and consequently improves the dependence from $O(tr(H)/\epsilon)$ to $O((tr(H)/\epsilon)^{1/3})$.
Strengths: The application of randomized midpoint in this composite sampling is novel, and there is genuine improvement in the rate estimate when compared to Freund et al.
The technique of double randomization is new and requires some novel analysis when contrasted with prior works.
The authors do a decent job of illustrating that the trace of the Hessian is $o(d)$ through some figures and discussion.
Weaknesses: The primary contributions of this paper are not particularly original and mostly stem from combining the framework in Freund et al. with the known analysis for randomized midpoint in Shen and Lee. This in my view is the primary weakness of the paper.
In general, claims about the “dimension-free” nature of the convergence guarantees need to be careful since the composite structure assumption is quite strong, although the authors in general do a good job of qualifying their claims.
Overall, this work contains some novel claims and results and is a bona fide improvement on prior work. However, the technical novelty is not significant, and I am borderline on this paper as a result.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: It seems inappropriate to compare this to the original randomized midpoint work/other work for the standard Langevin Monte Carlo, since they do not assume the composite structure of the problem. The primary highlighted comparison is with respect to Freund et al. (2022), in which case the gain is more like $tr(H)^{1/3}/\epsilon^{4/3}$. If $tr(H)$ is $O(1)$ then this is only a gain in epsilon, which is usually smaller than $d$.
What is the previous proof referred to in L. 253?
Terminology of “acceleration” should probably be avoided since this is classically used only to refer to sqrt(kappa) rates. A better term might be “improved discretization error”.
Typos:
L. 140 What is being made strongly convex?
L. 185 squred -> squared
L. 197 denote solution -> denote the solution
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 2 fair
Limitations: I have outlined my concerns already in the previous sections.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ● The primary contributions of this paper are not particularly original and mostly stem from combining the framework in Freund et al. with the known analysis for the randomized midpoint in Shen and Lee. This in my view is the primary weakness of the paper.
We would like to emphasize that our method is not a simple combination of the previous work.
First, the two-stage optimization in Freund et al. cannot be directly extended to the accelerated process or to complicated discretization schemes due to its form. Meanwhile, the midpoint discretization analysis introduces dimension-dependent errors, such as in line 447, which do not appear in the Euler discretization analysis of the overdamped Langevin dynamics or in the underdamped analysis. This calls for refined discretization schemes that eliminate the dimension-dependent error, and there are thus underlying difficulties in extending to complicated discretization schemes. We design a discretization scheme and prove that it achieves a convergence rate with $\mathrm{tr}(H)$ dependence; this can be achieved for methods based on both accelerated processes and complicated discretizations, implying that a wide range of Langevin algorithms can be adapted to achieve low dimension dependence.
● In general, claims about the “dimension-free” nature of the convergence guarantees need to be careful since the composite structure assumption is quite strong, although the authors in general do a good job of qualifying their claims.
Sorry for the ambiguity of our statement. We would like to clarify that our analysis is not restricted to the composite structure and applies to general strongly convex target distributions. As discussed in Section 3.2, the composite structure also covers a general $m$-strongly convex function, which can be split into $g(x) = \frac{m}{2} \|x\|^2$ and the weakly convex function $f(x) = U(x) - \frac{m}{2} \|x\|^2$; this leads to the same $\frac{\mathrm{tr}(H)^{1/3}}{\epsilon^{2/3}}$ convergence rate. One purpose of introducing the composite structure in the proof is to alleviate the dimension dependence of the Hessian trace; it does not serve as a prerequisite of the analysis. Without it, $\mathrm{tr}(H)$ would pick up an extra $md$ term, whose scaling is problem-specific and may be dimension dependent when $\frac{1}{m} = o(d)$. The natural composite structure is ubiquitous in sampling tasks.
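The splitting described here is easy to verify numerically on a toy potential (our own toy choice of $U$, not the paper's experiments): subtracting the strongly convex quadratic $g$ from an $m$-strongly convex $U$ leaves an $f$ whose Hessian is positive semi-definite, i.e. weakly convex.

```python
import numpy as np

# Toy m-strongly convex potential U(x) = 0.5 x^T A x whose spectrum
# is bounded below by m, built from a random orthogonal basis.
rng = np.random.default_rng(2)
d, m = 20, 0.1
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
eigs = m + rng.uniform(0.0, 5.0, d)      # eigenvalues >= m
A = Q @ np.diag(eigs) @ Q.T

# Composite split: g(x) = (m/2)||x||^2, f(x) = U(x) - g(x).
H_f = A - m * np.eye(d)                  # Hessian of f
min_eig = np.linalg.eigvalsh(H_f).min()
print(min_eig >= -1e-10)                 # f is (weakly) convex
```

The same split works for any twice-differentiable $m$-strongly convex $U$, since subtracting $m I$ from a Hessian with spectrum in $[m, \infty)$ keeps it positive semi-definite.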
● It seems inappropriate to compare this to the original randomized midpoint work/other work for the standard Langevin Monte Carlo, since they do not assume the composite structure of the problem.
● L. 140 What is being made strongly convex?
As discussed above, the algorithm applies to general strongly convex target distributions; it is $U$ that is strongly convex. We will discuss the composite structure further in a revised version.
● What is the previous proof referred to in L. 253?
The proof in Shen and Lee (2019). We follow the seminal contraction-based method of Shen and Lee (2019) and Cheng et al. (2018); the reference is meant to highlight the differences from that proof for readers familiar with Shen and Lee. Our analysis tracks the difference between the implemented process and the exact process whose initial distribution is the target distribution, and bounds the local error along the process. It is consistent with proofs based on the gradient flow.
● Terminology of “acceleration” should probably be avoided since this is classically used only to refer to sqrt(kappa) rates.
Thanks for pointing out this issue. "Acceleration" is indeed improper terminology here, and we will revise it.
---
Rebuttal Comment 1.1:
Title: Response
Comment: I thank the authors for their detailed response. Having read through the comments by the other reviewers and the subsequent discussion, I remain borderline since I am still skeptical that this paper presents a sufficiently novel contribution to merit acceptance. However, there does seem to be some genuine analytical novelty arising from the analysis of the discretization error, and I have raised my score by one point as a result.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thanks again for the suggestions and the recognition of our work. Please let us know if you have more questions or comments. | Summary: In this paper, the authors propose a Langevin-type algorithm for sampling a strongly log-concave distribution with a composite structure. Their method can be viewed as a variant of the randomized midpoint method, with two key modifications: (i) they only discretize the smooth convex part of the negative log likelihood but retain the strongly convex part; (ii) they draw both step sizes in the algorithm randomly according to carefully crafted distributions. It is shown that the algorithm achieves an accelerated rate without explicit dependence on the dimension.
Strengths: - The result is interesting and noteworthy. To sample a strongly log-concave distribution, the best-known iteration complexity bound either has an undesirable dimension dependence, such as $\tilde{O}(\frac{d^{1/3}}{\epsilon^{2/3}})$ in [Shen and Lee, 2019], or a worse dependence on $\epsilon$, such as $\tilde{O}(\frac{\mathrm{tr}(H)}{\epsilon})$ in [Freund et al., 2022]. In this work, the authors manage to achieve the best of both worlds and prove a complexity bound of $\tilde{O}(\frac{(\mathrm{tr}(H))^{1/3}}{\epsilon^{2/3}})$.
- The authors introduce the double randomized technique to reduce the discretization error, which seems novel to me.
Weaknesses: I think the presentation of the paper can be improved.
- In particular, the explanation in Section 5 is not very helpful, and it remains unclear to me why the randomized step size helps reduce the discretization error, and why the authors choose the specific distribution in Lemma 5.2. Moreover, it would be helpful if the authors can compare their analysis with the one in [Shen and Lee, 2019] to better explain how they remove the dimension dependence.
- Also, there are numerous typos in the main text and the proofs in the appendix, which sometimes make it hard to understand. Please see the "Questions" section for more details.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - While this work focuses on the dimension dependence, the condition number $\kappa = L/\mu$ can also impact the convergence rate greatly. How is the result in this paper compared with the existing works in terms of the dependence on $\kappa$?
- In Lemma 5.1, it is unclear to me what it means that "$x_n$ and $x_n^*$ are coupled synchronously". Do you mean that they are driven by the same Brownian process?
- Page 9, Lines 277-281: I am confused by this paragraph. By definition, isn't the random weight $w_n(s,\alpha)$ a $d$-dimension vector? If so, how can you apply Claim (A) in Lemma 5.2?
- Page 9, Lines 282-292: It is also unclear to me why "the randomized step size make it possible to consider the averaged effect".
- Page 12, (A.1): I am not sure why the authors introduce the extra parameter $\gamma$. As far as I can see, $\gamma$ is fixed as 2 in the rest of the proof.
- Page 14, Section B.1.1: I don't see why the random process $B_t^{\alpha}$ is a Brownian bridge. Is it supposed to be the random process $B_t$ conditioned on $B_{\alpha}$? And why do we need to introduce this process in the first place?
- Page 14, the equations under Line 431: Here the authors exchange the order of differentiation and expectation, which is not justified. Indeed, it is not even clear if $\Omega(t)$ is differentiable since it involves the solution trajectories of SDEs.
Typos in the paper:
- The convergence rates reported in the introduction are inconsistent with the ones in Table 1. Specifically, the rate by [Shen and Lee, 2019] should be $\tilde{O}(\frac{d^{1/3}}{\epsilon^{2/3}})$ (Page 2, Line 47), and the rate by [Freund et al., 2022] should scale linearly with $O(\frac{1}{\epsilon})$ (Table 1, Row 5).
- Page 1, Line 49: "convergence dependence" -> "dimension dependence"
- Page 5, Definition 3.6: $A$ is undefined.
- Page 9, Line 275: the integral should be $\int_{0}^t e^{\frac{s-t}{\kappa}} F(s) ds$.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes, the authors addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ● Page 9, Lines 282-292: It is also unclear to me why "the randomized step size makes it possible to consider the averaged effect".
● In particular, the explanation in Section 5 is not very helpful [...] better explain how they remove the dimension dependence.
In short, the random step size is introduced to bound errors such as $\|\int_0^t x_n(s) - x_n^*(s)\,\mathrm{d}s\|^2$ in the discretization analysis. This quantity may be dimension dependent, as a standard bound introduces an additional $dt$ discretization error. We instead bound this kind of error using the descent of $\Omega_n(t)$: under the stochastic step size, $\Omega_{n+1}$ becomes $\mathbb{E}\,\Omega_n(t)$, so we can bound the dimension-dependent discretization error as $\|\int_0^t x_n(s) - x_n^*(s)\,\mathrm{d}s\|^2 \lesssim \mathbb{E}\,\Omega_{n}(t)$. Obtaining the dimension-free error then requires a careful analysis and a delicate design of $\rho$ and $\rho'$.
● While this work focuses on dimension dependence, the condition number $\kappa=L/μ$ can also impact the convergence rate greatly. How is the result in this paper compared with the existing works in terms of the dependence on $\kappa$?
Thanks for your suggestion. This paper mainly considers dimension dependence; studying the dependence on $\kappa$ is very interesting, but we have not yet carried out a careful analysis of it. We currently achieve a $\kappa^{4/3}$ dependence. We think it is possible to improve this dependence using the technique of Cao et al. (2019) (On explicit $L^2$-convergence rate estimates for underdamped Langevin dynamics), and we will work on it in the near future.
● In Lemma 5.1, it is unclear to me what it means that "$x_n$ and $x_n^∗$ are coupled synchronously". Do you mean that they are driven by the same Brownian process?
It exactly means that they are driven by the same Brownian motion.
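Synchronous coupling, i.e. driving both chains with the same Gaussian increments, can be illustrated in a few lines (a toy 1D overdamped Euler example of our own, not the paper's underdamped scheme): the shared noise cancels in the difference of the two chains, which then contracts deterministically.

```python
import numpy as np

# Euler steps of dX = -X dt + sqrt(2) dB for two chains sharing the SAME
# Brownian increments (synchronous coupling). The noise cancels in x - y,
# so the gap contracts by the deterministic factor (1 - h) at every step.
rng = np.random.default_rng(3)
h, x, y = 0.1, 5.0, -5.0
for _ in range(100):
    xi = rng.standard_normal()          # one draw, used by both chains
    x = x - h * x + np.sqrt(2 * h) * xi
    y = y - h * y + np.sqrt(2 * h) * xi
print(abs(x - y))                       # = 10 * 0.9**100, about 2.66e-4
```

The final gap is exactly the initial gap times $(1-h)^{100}$, independent of the realized noise, which is why synchronous coupling is a convenient device for contraction arguments.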
● Page 9, Lines 277-281: I am confused by this paragraph. By definition, isn't the random weight $w_n(s,\alpha)$ a $d$-dimension vector? If so, how can you apply Claim (A) in Lemma 5.2?
Thanks for pointing out the ambiguity. Line 277 illustrates how our choice of $\rho$ and $\rho'$ inherits the midpoint discretization, and Claim (A) shows that matching expectations is critical for obtaining this acceleration. In the rigorous proof, we apply (A) of Lemma 5.2 with $F(s) = \langle w_n(0, \alpha), \nabla f(x_n(s)) - \nabla f(\hat{x}_n(\alpha)) \rangle$. This analysis neglects a discretization error between $\bar{w}$ and $w_n(s, \alpha)$, as discussed in line 280, and the expression in line 279 is meant to be an inner product. We will make this clear in a revised version.
● Page 12, (A.1): I am not sure why the authors introduce the extra parameter $\gamma$. As far as I can see, $\gamma$ is fixed as $2$ in the rest of the proof.
Thanks for raising this issue with us. We indeed fix $\gamma = 2$ in the rest of the proof and will specify this in a revised version.
● Page 14, Section B.1.1: I don't see why the random process $B_t^{\alpha}$ is a Brownian bridge. Is it supposed to be the random process $B_t$ conditioned on $B_{\alpha}$? And why do we need to introduce this process in the first place?
$B_t^{\alpha}$ is a Brownian bridge, conditioned on $B_{\alpha}$, and the stated process is the SDE of the Brownian bridge. We introduce $B_t^{\alpha}$ for technical reasons: the diffusion process in line 428 is an adapted process conditioned on $B_{\alpha}$, whereas the process $(x_n(t), v_n(t))$ driven by $B_t^{\alpha}$ is not adapted to the filtration when $t < \alpha$.
● Page 14, the equations under Line 431: Here the authors exchange the order of differentiation and expectation, which is not justified. Indeed, it is not even clear if $\Omega(t)$ is differentiable since it involves the solution trajectories of SDEs.
Thanks for raising this to us. The exchange of differentiation and expectation indeed requires further justification. A rigorous proof considers the evolution of the process not in expectation, i.e. works with $\mathrm{d}\Omega(t)/\mathrm{d}t$ directly, and then takes the expectation in line 437, where the differential of $\Omega(t)$ is computed via Itô's formula.
● The convergence rates reported in the introduction are inconsistent with the ones in Table 1. Specifically, the rate by [Shen and Lee, 2019] should be $\tilde{O}\left( \frac{d^{1/3}}{\epsilon^{2/3}} \right)$ (Page 2, Line 47).
Thank you for pointing out this issue. It is indeed $\epsilon^{2/3}$ in the denominator.
● And the rate by [Freund et al., 2022] should scale linearly with $O\left( \frac{1}{\epsilon} \right)$ (Table 1, Row 5).
We compare convergence rates in Wasserstein distance, whereas the $\frac{1}{\epsilon}$ rate is proven in KL divergence, which implies a $\frac{1}{\epsilon^2}$ rate in Wasserstein distance.
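For completeness, the conversion invoked here can be made explicit via a standard transport inequality (our own addition, not taken from the paper's text): Talagrand's inequality for a $\mu$-strongly log-concave target $\pi$ states

```latex
W_2(Q, \pi)^2 \;\le\; \frac{2}{\mu}\,\mathrm{KL}(Q \,\|\, \pi).
```

Hence guaranteeing $W_2(Q,\pi) \le \epsilon$ requires $\mathrm{KL}(Q\|\pi) \le \mu\epsilon^2/2$, so an $O(1/\epsilon)$ complexity measured in KL accuracy corresponds to an $O(1/\epsilon^2)$ complexity at Wasserstein accuracy $\epsilon$.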
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. It addresses some of my questions, but some parts remain unclear and ambiguous to me. In particular, I still don't have a good understanding of why the random step size enables one to bound the discretization error $\|\int_0^t x_n(s) - x_n^*(s)\,\mathrm{d}s\|^2$ in terms of $\mathbb{E}\,\Omega_n(t)$, which seems to be the key to obtaining dimension-free bounds.
After reading other reviews, I remain overall positive about the paper and decide to keep my score. On the other hand, the presentation of the paper has much room for improvement, and I strongly encourage the authors to include more detailed explanations on the key techniques and highlight the necessity of randomized stepsize scheme in the revision. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
A Rigorous Link between Deep Ensembles and (Variational) Bayesian Methods | Accept (oral) | Summary: This paper provides a theoretical framework based on generalized variational inference [1] and Wasserstein gradient flows (WGF) for analyzing deep ensemble methods and their regularized versions. The authors demonstrate that deep ensembles and other variational Bayesian methods can be cast as instances of an infinite dimensional variational inference problem and the WGF of different instantiations of a free energy functional. The authors additionally use their theoretical framework to derive a new algorithm for generating samples from a target distribution.
[1] Knoblauch, Jeremias, Jack Jewson, and Theodoros Damoulas. "An optimization-centric view on Bayes’ rule: Reviewing and generalizing variational inference." Journal of Machine Learning Research 23.132 (2022): 1-109.
Strengths: The paper is well-organized and written, and the benefits of the unifying theoretical framework are compelling. The discussion of how deep ensemble methods can be viewed through the lens of WGF and the use of this lens to prove theoretical guarantees on the limiting behavior of particle estimations is useful and insightful. This work also holds the promise of deriving new algorithms, as demonstrated by the deep repulsive Langevin ensembles presented in Section 4.3.
Weaknesses: #### **Experiments section is difficult to follow**
While the details corresponding to the various figures in Section 5 are fully provided in Appendix G, this section is currently difficult to follow as a stand-alone section in the main paper. Without (even high level) details on the experimental setup and the general motivation of each experiment it is difficult to dive right into the results and Figures as they are currently presented. I recommend moving some details from Appendix G into the main text and providing the context for each experiment before diving into the results.
---
#### **Motivation for convexification is unclear**
While the authors prove that convexity of the infinite-dimensional variational form of the learning objective guarantees uniqueness of a minimizer, this is somewhat disconnected from the presented goal of optimizing $\ell(\theta)$ via probabilistic lifting. For example, in footnote 1 on page 2, the authors argue that the unregularized variational objective has a non-unique optimum. However, the local optima all have equivalent values of the objective and are simply weighted averages of equivalent optima of $\ell$, hence it is not clear why uniqueness is a desideratum here.
Additionally, Figure 2 in Section 5 demonstrates how deep ensembles (DE) do not converge to $Q^*$. However, although deep Langevin ensembles (DLE) and deep repulsive Langevin ensembles (DRLE) provably converge to $Q^*_{DLE}$ and $Q^*_{DRLE}$, respectively, these optimal distributions are also not equal to $Q^*$.
Hence a clearer exposition as to why regularized optima are preferred to the unregularized ones is needed.
---
#### **Motivation for DRLE is lacking**
While DRE / DRLE is indeed interesting as a new algorithm that can be derived from the presented theoretical framework, it would be great if the authors also provided some intuition / motivation as to why MMD is perhaps a better suited divergence regularizer than KL.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: In Lines 188-191, the authors state that
> In theory, the PDE in (4) provides us with a direct way of implementing infinite-dimensional gradient descent for (1): simply follow the WGF. In practice however, this is impossible: **numerical solutions to PDEs become computationally infeasible for the high-dimensional parameter spaces which are common in deep learning applications.**
I am unclear what is meant by this. Why is the high-dimensionality of deep learning parameterizations relevant here? Isn’t this problem simply impossible since it involves an infinite-dimensional space?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Authors can potentially elaborate on the future directions of this work, specifically around analyzing the approximation errors of approximating WGF with finite number of particles over a finite time horizon.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: #### Details on experimental section
As the reviewer notes themselves, the paper is already rather densely packed. Because of space limitations, we were not able to include all relevant experimental details in the paper. We chose to focus on methodological details, but will move as many details into the main part in accordance with the available space in the final version.
#### Motivation for convexification
We thank the reviewer for their comment. One needs to distinguish between two questions. The first one relates to the uniqueness of the minimiser of the optimisation problem. The point that we try to make with the footnote is simple: without regularization, the optimization problem can in general not have a unique minimiser. This relates to the bigger question that the reviewer raises: Why should one care about uniqueness? The main problem with non-uniqueness is that this typically translates into undesirable properties of the inference algorithm—for example, unless the solution is unique, it is not even clear which solution (of the many possible ones) an algorithm should target. This is illustrated by Theorem 1: the asymptotic distribution of Deep Ensembles depends strongly on initializations and how large the domain of attraction for a given local minimum is. Therefore, if we initialize poorly, we may end up putting most particles close to a comparatively poor local minimum.
Identifying a unique target to aim for, which in our case is $Q^*$, is typically seen as the first step in designing an inference algorithm with theoretical guarantees—after all, unless the target is clearly defined, it is not generally possible to assess an algorithm’s performance. Once the target has been identified, the second step is to build an inference algorithm that finds it.
Regarding the interpretation of Figure 2: There may be some confusion about what $Q^*$ is in the plot. The dotted line in the background is the (unique and optimal) $Q^*$ of the optimisation problem; and the histogram clearly shows that DLE and DRLE do generate samples from this global minimiser (in line with our theory). On the other hand, DE does not (!). While it would be possible to find an initialisation distribution $Q_0$ such that DE converges to its global minimum, we would have to know where the global minimum (and its region of attraction) are located to do this—and this is precisely the kind of dependence on initialisation that is undesirable! DRLE and DLE converge to their respective optimal $Q^*$ regardless of initialization, thereby losing this undesirable trait. Importantly, note that this would be impossible to achieve if we hadn’t guaranteed the existence of a unique minimizer through convexification.
#### Motivation for DRLE
We thank the reviewer for this great question. It is a point that we will discuss more thoroughly in the final version of the manuscript. The MMD is based on the kernel $\kappa$, which introduces a repulsive effect between different sets of particles. This means that we can essentially choose our own metric for comparing different parameter vectors (i.e. particles) with each other. Which sets of parameters should be considered ‘similar’ depends heavily on the application. We chose a very basic kernel, the squared exponential, which essentially just measures the Euclidean distance between the parameter vectors. However, in certain applications one might have a very good understanding of the type of diversity one would want to encourage, and this knowledge could be embedded in the kernel $\kappa$. We considered a thorough investigation of the effects of the kernel to be beyond the scope of this paper, but it is an interesting avenue for future research.
#### Questions
[PDE solvers] We are grateful the reviewer pointed out that this sentence is misleading. We mean the following: once we have a closed-form expression for the Wasserstein gradient (as, for example, in the KL and MMD case), the PDE in (4) gives us a way to determine the pdf of $Q(t)$ at any time $t$: we can simply apply numerical PDE solvers to find an approximation of $q(t,\theta)$. However, PDE solvers quickly become computationally infeasible once the input space has more than two or three dimensions. As the parameter space in deep learning is typically at least in the thousands, and more often than not in the millions or billions, it is computationally infeasible to deploy numerical PDE solvers.
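To see why grid-based solvers break down, consider the node count of a naive tensor-product grid (the numbers below are illustrative, not taken from the paper):

```python
import math

def grid_exponent(d, points_per_dim=100):
    """Base-10 exponent of the number of nodes in a tensor-product grid
    with points_per_dim points per dimension in d dimensions."""
    return d * math.log10(points_per_dim)

# Feasible in two or three dimensions, hopeless for the million-dimensional
# parameter spaces of deep networks.
for d in [2, 3, 10, 10**6]:
    print(f"d = {d}: ~10^{grid_exponent(d):.0f} grid nodes")
```

Particle methods sidestep this exponential blow-up by approximating $Q(t)$ with a finite ensemble rather than a grid.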
#### Limitations
This is an excellent point. We have written a response to this in the general rebuttal point 3.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thank you for the thorough response. I do not have any additional comments or questions at this time. | Summary: The paper establishes theoretical connections between ensembling, an old and established method of deriving uncertainty estimates, and variational Bayes (inference) methods. The authors use theory from the interaction of particles in a thermodynamic system to generalise and connect seemingly different ways of ensembling and variational inference. This is done by formulating the original non-convex optimization problem, ubiquitous in ML and statistics, as an infinite-dimensional convex optimization problem in the space of probability measures. The addition of a regularization quantity ensures the strict convexity of the problem, and different choices of this quantity lead to the derivation of various inference algorithms.
Strengths: 1. The paper is well written, theory heavy, and addresses the important topic of deep ensembling and its connection with variational Bayes methods.
2. I am not so good with theory, but the theorems and equations looked ok to me without obvious mistakes.
3. Although this is a theory paper, the theoretical claims are well supported by the experiments, and where they are not, the authors explain this well.
4. The distinction between IDGVI and FDGVI is well drawn out and explained. Also, the limitation of FDGVI that the approximating family is restricted by construction serves as a motivation for using IDGVI methods.
Weaknesses: 1. There is a lot of content compressed into 9 pages, which can be challenging for a reader; a journal might have been more appropriate for this work.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. In practice, deep networks use stochastic gradients; does the theory hold for stochastic gradients, especially as Section 2 explicitly uses gradients for motivation?
2. For the case of FDGVI, the authors do not consider normalizing flows or SIVI as methods to overcome the limited capacity of the approximating family when comparing it with IDGVI.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: The limitations or practical challenges with the derived inference algorithms can be addressed. How practical is the result from Theorem 2, is it something that will only work asymptotically or will this work practically and if so how fast ?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: #### Questions
1. This is indeed an excellent question. It is true that in practice we will need to replace full gradients by mini-batch versions and, for example, the kernel mean embedding by its Monte Carlo estimator. The reviewer correctly observes that the theoretical results in Section 4 do not account for this type of approximation and sub-sampling. However, this is not unique to our analysis, and is a rather common simplification. More importantly, we believe that the results obtained by ignoring this added complication already provide the most important insights into the differences between various algorithms and can be used as a starting point for further theoretical investigations.
More precisely, we think of the Wasserstein gradient flow as a powerful and principled tool to derive new inference algorithms for different types of regularisers (MMD/KL and maybe others in the future). We can then combine it with standard plug-in estimators (like mini-batch gradient estimators) to immediately obtain an algorithm that approximately performs gradient descent in infinite dimensions.
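The plug-in idea can be sketched in one dimension (an illustrative example of ours, using a simple MSE loss rather than the paper's setup): the mini-batch gradient is an unbiased plug-in estimator of the full gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=10_000)

def full_grad(theta):
    """Full-data gradient of the MSE loss 0.5 * mean((theta - x)**2)."""
    return theta - data.mean()

def minibatch_grad(theta, batch_size=64):
    """Plug-in mini-batch estimator: unbiased for full_grad(theta)."""
    batch = rng.choice(data, size=batch_size, replace=False)
    return theta - batch.mean()

# Averaged over many draws, the mini-batch gradient matches the full one.
est = np.mean([minibatch_grad(0.0) for _ in range(2_000)])
```

The same substitution applies to each particle's gradient in the discretised flow: the infinite-dimensional scheme is unchanged, only the gradient oracle is approximated.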
2. We thank the reviewer for pointing us to these alternatives for VI. We have included references for VI via normalizing flows and SIVI in the main text. To the more general point: it is indeed true that one can make the variational family more expressive, but this comes with a trade-off: if we use a more expressive approximating family, the KL-divergence term is typically no longer available in closed form, which means we need to introduce approximations for the regulariser. Furthermore, the resulting optimization problems for the variational parameters are still highly non-convex and therefore often depend heavily on good initialisations. We are not aware of any works in the FD-GVI literature that obtain results competitive with deep ensembles. We believe that this is a direct consequence of the above problems.
#### Limitations
The reviewer raises an excellent point. Results such as the one presented in Theorem 2 are only asymptotic in nature, and consequently we have no guarantee that the convergence is fast enough. These results should therefore be seen as necessary but not sufficient conditions for a good inference algorithm. Stronger results, which quantify the speed of convergence, would require us to make extremely strong assumptions that will be violated in deep learning. For example, we could trivially obtain quantitative results by citing the relevant literature, albeit under very strong assumptions that would never be satisfied in the context of deep learning: Section 11.2 of Ambrosio et al. (2005) shows that if $L$ is $\lambda$-convex along generalized geodesics, we obtain exponentially fast convergence. $\lambda$-convexity of $L$ can be guaranteed if the potential $V$ is strictly convex (cf. Section 9.3 of Ambrosio et al. (2005)), which is surely never satisfied in deep learning applications.
The final version will make this difference between qualitative and quantitative results clearer, and discuss the strong assumptions required to obtain quantitative results.
However, we want to point out that standard parameterized FD-GVI comes with no guarantees of any type. Although qualitative results have their shortcomings, they at least show that the inference algorithm is, in principle, powerful enough to solve the optimization problem at hand.
#### References
Ambrosio, L., Gigli, N., and Savare, G. (2005). Gradient flows: in metric spaces and in the space of probability measures. Springer Science & Business Media.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: Thanks to the authors for replying to my questions. I am quite satisfied with their detailed comments to my questions and other reviewers' questions. I recommend a strong acceptance. | Summary: The authors propose to unify existing theory on Bayesian (variational) inference (VI) by addressing a generalized objective, which is obtained from standard parameterized loss minimization by “probabilistic lifting” (re-casting in a space of probability measures over the parameter) and “convexification” (ensuring the existence of a global minimizer by regularization), with infinite-dimensional gradient flows in 2-Wasserstein space. A general recipe is provided to implement such a Wasserstein gradient flow (WGF) via an energy objective and a system of interacting particles. In the key contribution of the paper, the authors study WGF with different types of regularization–most notably, the unregularized version corresponding to deep ensembles (DE). It is shown that DE do not conduct a Bayesian learning procedure and systematically fail to generate samples from the optimal distribution, yet perform competitively thanks to the flexibility of the infinite-dimensional inference they realize (as opposed to, e.g., classical parametric VI).
Strengths: * [S1] **Unifying framework**. After much discussion in the past few years, the authors are–to the best of my knowledge–the first to establish a comprehensive theory that encompasses (finite-dimensional) VI and DE.
* [S2] **Clarity**. Despite the rather abstract subject, the authors present a coherent and easy-to-follow sequence of arguments. Complexity is strictly limited to the necessary extent.
* [S3] **Rigor**. Mathematical concepts and notation are sound. Extensive proofs and/or references to prior work underline every claim (though I did not check every proof in detail).
Weaknesses: * [W1] **Analysis of DE behavior** (see Questions)
* It is not entirely clear if the paper studies arbitrary variants of DE or only a very narrowly defined version (see Q5).
* The authors conjecture that the number of samples being vastly smaller than the number of local minima is responsible for D(R)LE not outperforming DE consistently and point to Fig. 4. This evidence seems rather anecdotal and could benefit from a more detailed investigation. Also see Q7--Q8.
* [W2] **Omissions in notation**. While the notation is consistent and comprehensible overall, the authors tend to omit integration domains, objects of differentiation etc. (e.g., Eq. 1, l. 150, l. 177, l. 179). With the shifting of integration spaces and various gradients involved, it would be helpful to be as explicit as possible in this regard.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: * [Q1] Eq. 6: What does the index $j$ relate to?
* [Q2] l. 218: Are there any convergence results in the respective limits of $T$ and $N_E$?
* [Q3] l. 223: The experiments are promised to confirm small approximation errors due to finite samples and time, in particular in comparison to finite-dimensional methods. Where, exactly, do I find evidence for this claim?
* [Q4] l. 236: Just for the sake of clarity, is $\theta^\prime_n(t) = - \nabla \ell(\theta_n(t))$ equivalent to $d \theta_n(t) - \nabla V(\theta_n(t)) dt$?
* [Q5] l. 237: Do I understand correctly that you interpret DE as training with no regularization whatsoever (weight decay, batch normalization etc. – let alone variations with weight sharing and the like)? I doubt that many researchers actually apply such a decidedly naive approach.
* [Q6] l. 253: Can DE implementing infinite-dimensional GD be understood as taking a non-parametric/functional approach as to what the distribution of the generated samples looks like (as opposed to, e.g., mean-field VI with a finite parameter vector)?
* [Q7] Fig. 2: Is there an explanation why DE exhibits this precise 50/50 spread of the probability mass?
* [Q8] Fig. 4: I’m not sure I understand the key message here. Why do the particles end up in the same modes despite different $Q^\ast$? What would be the expected behavior?
---
Minor remarks
* l. 53: Redundant “i” in “probability”
* l. 72: Shouldn’t $D$ provide the mapping $(Q, P) \mapsto D(Q, P)$ according to the stated domain?
* l. 170: I feel enough space can be freed to include the definition of the 2-Wasserstein metric, it seems an odd choice to omit this arguably relevant information.
* l. 292: Is there a $d$ missing in front of $Q(\theta)$ in the first double integral?
* l. 297: $L^{FE}$ with capitalized superscript ($L^{fe}$ otherwise).
* l. 329: White space after “minimisers”
* l. 330: Remove either “which” or “that”
* l. 337: “matters”
* Fig. 3 (caption): White space in “FD-GVI”
* Table 1: I would recommend removing the “Boston” dataset due to its racism issues. Also: “methods… outperform” or “method… outperforms”.
* l. 355: “lens”
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Given that the main contribution is a unifying framework for existing theories, this point doesn’t apply as usual. However, the authors should state more clearly that the evidence shown in Section 5 for findings in Section 4 is quite limited.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: #### Weaknesses
1. One of the aims of this paper is to show that the naive strategy of train-and-repeat can be understood as a Wasserstein gradient flow of the probabilistic lifting of the loss function. Which strategies are covered must be decided on a case-by-case basis. Weight decay, for example, is covered by our theory, as it just means that $\ell$ is given as MSE+$\lambda |\theta|^2$. Batch normalization is also covered in the sense that an ensemble of neural networks which implement batch normalization can be understood through the same WGF lens. However, we did not want to give the impression that DEs are a bad strategy (even in their most basic form). Quite the opposite: we stress throughout the paper that they can be interpreted as implementing a rather sophisticated version of gradient descent and that, especially in the multi-modal loss landscapes of deep learning, it is hard to come up with a better strategy.
2. Thank you for pointing this out; we will make a more concerted effort at explaining this in the updated manuscript. As our theory shows, a key difference between DE and D(R)LE is the form the solutions take: with infinitely many particles, D(R)LE produces a unique probability measure that has a continuous density (see Figure 2). In contrast, Theorem 1 shows that DEs produce probability measures with atomic support: they have probability measure zero almost everywhere, except at a few individual points. In a sense, DEs are an extremely sparse representation of parameterisations of the neural network we care about. This is different for the measures produced by D(R)LEs: because (with infinitely many particles) they are densities, they try to assign probability mass to all regions of the parameter space. It is reasonable to believe that this would lead to a less sparse/better representation of parameterisations of the neural network we care about. Figure 2 supports this idea: in the setting where there are many more particles than minima, we can cover the parameter space of the neural network well, and in these settings the purely atomic nature of DEs would generally be a drawback. To show this, we sampled the initial values for the parameters of the DE uniformly between -2.5 and 2.5. Since there are two minima in Figure 2 whose basins of attraction for gradient descent are around [-2.5, 0] and [0, 2.5], a uniform initialization leads to the exact 50:50 spread.
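The 50:50 split can be reproduced in a few lines. The following is an illustrative sketch (not the paper's actual experiment; the double-well $V(\theta) = (\theta^2 - 4)^2/16$, with minima at $\pm 2$ and basin boundary at 0, is our stand-in loss) of independent gradient descent from a uniform initialization:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_V(theta):
    """Gradient of the stand-in double-well loss V(t) = (t**2 - 4)**2 / 16,
    which has minima at -2 and +2 and basin boundary at 0."""
    return (theta ** 2 - 4) * theta / 4

# A 1D "deep ensemble": independent gradient descent from uniform init.
theta = rng.uniform(-2.5, 2.5, size=10_000)
for _ in range(1000):
    theta -= 0.05 * grad_V(theta)

frac_right = np.mean(theta > 0)  # close to 0.5
```

Because the initialization is symmetric around the basin boundary at 0, roughly half of the particles fall into each basin, reproducing the 50:50 spread.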
However, in practice, neural nets don’t look like Figure 2—they have lots and lots of minima, and we have very few particles to try and approximate the densities that look so clean and regular in Figure 2. That’s why we include Figure 4: it shows what actually happens in this scenario. If the number of particles is far lower than the number of minima, the particles in D(R)LE are not enough to be a good approximation to a density on this type of space. Instead, they drift to the closest minima (exactly like they would in DEs!) and get stuck there for a long time. To show this phenomenon even more clearly, we have attached to this review the original picture from Figure 3 together with a variant of the same experiment that uses only four particles to approximate the D(R)LE measures. As is clear from that picture, when the number of particles is small relative to the number of minima, a naive interpretation of the theory is misguided, and D(R)LE behaves very similarly to DEs. This is consistent with how the underlying equations evolve: when we discretise the evolution equations, what we get are basically slightly modified and randomized gradient descent schemes (see e.g. eq (7)). Theory (and experience from Langevin sampling for Bayesian posteriors) shows that as one keeps evolving the particle ensemble long enough, it will eventually escape and behave sufficiently differently from DEs—but in deep learning, we will not have the resources to keep the processes running forever. The naive expectation that we would get the type of behavior we see in Figure 2 in neural networks is therefore voided; instead, we should expect that D(R)LEs will act very similarly to DEs.
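The point that the discretised evolution equations are "slightly modified and randomized gradient descent schemes" can be sketched as follows (a hypothetical illustration of ours, not the paper's eq. (7) verbatim; `grad_V` stands for a generic loss gradient):

```python
import numpy as np

def gd_step(theta, grad_V, lr):
    """One deep-ensemble update: plain gradient descent per particle."""
    return theta - lr * grad_V(theta)

def langevin_step(theta, grad_V, lr, rng):
    """One unadjusted Langevin update: the same gradient step plus
    Gaussian noise with the matched scale sqrt(2 * lr)."""
    noise = rng.standard_normal(theta.shape)
    return theta - lr * grad_V(theta) + np.sqrt(2 * lr) * noise

# From identical starting particles, the two updates differ only by noise.
rng = np.random.default_rng(1)
theta0 = np.zeros(200_000)
det = gd_step(theta0, lambda t: t, 0.01)
sto = langevin_step(theta0, lambda t: t, 0.01, rng)
noise_var = float(np.var(sto - det))  # close to 2 * lr = 0.02
```

Over a single step the two schemes are nearly indistinguishable; only over long time horizons does the noise let the ensemble escape local minima, which is exactly why resource-limited runs behave like DEs.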
3. Thank you for pointing this out; we will improve the paper and make these points clearer. Specifically we will explain that all integrals are over $R^J$ and all gradients are with respect to $\theta$.
#### Questions
1. The number of ensemble members is $N_E$ (e.g. $N_E=10$). This means we train 10 neural networks where each neural network has its own set of parameters. The index $n$ therefore would run from 1 to 10 and denote the set of parameters corresponding to the n-th neural network.
2. Yes, indeed. In Section 4.2 and Theorem 2 we show that under certain conditions our intuitions are validated and we obtain convergence results.
3. In Figure 2 we illustrate that the samples we obtain from implementing our method for DLE and DRLE are close to the optimal measures.
4. Yes, this is indeed the same.
5. See answer to Weakness 1.
6. Yes. Commonly, ‘non-parametric’ methods refer to the setting where the number of parameters can be arbitrarily high, and where in the limit of infinitely many parameters, one can recover an (infinite-dimensional) truth. In the context of our paper, the non-parametric object is indeed the targeted distribution itself, while the number of particles (of DE or D(R)LE) can be seen as the (arbitrarily large) number of parameters. As our theory shows, if the number of particles/parameters goes to infinity, we can recover the infinite-dimensional object we are targeting. This is different from parametric/parameterised problems, where there is not a natural way of making the parameter space arbitrarily large. E.g., typical variational families such as the family of normal distributions is parametric—but the collection of normal mixture distributions (with arbitrarily many and potentially infinitely many mixture components) would be non-parametric.
7. See Weakness 1.
8. See Weakness 2.
#### Limitations
See global response, point 4. | Summary: To improve the accuracy of uncertainty quantification, the authors aim to provide a mathematically rigorous link between Bayesian inference, variational Bayes methods, and ensemble methods. In this work, methods such as variational inference, Langevin sampling and deep ensembles can be seen as particular cases of an infinite-dimensional regularised optimization problem formulated via Wasserstein gradient flows. They also provide a novel inference algorithm based on MMD and gradient descent in infinite dimensions plus regularisation.
The procedure takes place by reframing the usual finite-dimensional loss-minimization problem as an infinite-dimensional one. This is done by rewriting the original optimization problem as an infinite-dimensional problem over the set of probability measures $\mathcal{P}(R^J)$ and introducing a strictly convex regulariser to induce a unique global solution. This solution is assumed not to be too different from the solution to the original problem, which is controlled by a reference measure $P$. This leads to an interpretation of many different inference setups as particular cases of the proposed optimization problem. The approach can therefore be seen as a combination of the proposals in Knoblauch et al. (2022) and Ambrosio et al. (2005).
Strengths: * Good idea; it could be interesting to the community if proven in other contexts as well.
* The formulation is clear and elegant thanks to the gradient flows and the usage of Wasserstein space. The use of the thermodynamic formulation of free energy is very attractive as well.
Weaknesses: * Although the proposal is interesting and elegant, I think the experimental part of the paper does not provide enough evidence of the benefits related to this framework change. Results such as those present in Figure 3 could, in principle, be rivaled by previous methods such as [1] and [4], neither of which is discussed here. The authors could perhaps provide a stronger motivation in this regard, and maybe try to encompass these other methods inside their framework.
* Some literature relevant to the topic at hand seems to be missing from the discussion, or at least should be discussed more thoroughly:
* Regarding the definition of infinite-dimensional GVI methods, I consider that other sample-based methods are left out and should be considered, such as [1,2,3]. These works seem especially relevant due to the interest in implicitly-defined targets $Q^*$, in particular those that make use of the function-space formulation, such as [1] or [4].
* I think finite-dimensional GVI methods are misrepresented as they can be much more expressive than the selection made in Section 2.2 may lead to believe. I consider that this point should be addressed, and the discussion must be readjusted accordingly in order to highlight the benefits of the proposed approach without relying on this fact. As examples of this matter, please see references [4,5,6].
* The writing can generally be improved, since the paper can be hard to follow at times. This is partly a consequence of the amount of information provided, which is a positive point, although Sections 3 and 4 could be polished further.
* (minor) The presentation could be improved, for example, by converting images to formulas, as in Figure 1, or improving the layout on the final page.
* (minor) Since Wasserstein spaces are such a crucial point of this work, I would suggest devoting a bit more time to explaining the basics of the concept in the main text itself rather than fully depending on the sources.
(_References included in the "**Limitations**" section_)
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors:
* Can this framework be used to describe methods that obtain an a-posteriori approximation of the predictive distribution via Laplace approximation or similar methods? As examples, please see [7,8]
* Since now inference is conducted without the guarantees provided by the Bayesian method, how should the distributions obtained be interpreted or used?
* I think other possible interesting regularisation choices would be Renyi divergences and also any proper scoring rule, defined in [9], which may solve issues related to the MMD and KL divergence.
(_References included in the **"Limitations"** section_)
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations:
* Since the Bayesian framework is abandoned, I fear there are no guarantees about the properties of the distributions obtained in the same sense as with Bayesian inference. Although this can be somewhat justified by results, a lot more work is needed in this regard for methods that rely on these extensions (which is a problem for this paper, although definitely not exclusive to it).
* The paper is centred on theoretical developments, and as such, the theoretical discussion and argumentation is really interesting. However, and although it is not the core of the paper, the experimental phase leaves a lot to be desired in terms of justifying why this formulation change is needed.
---
**References**:
[1] Rodrı́guez-Santana, S., Zaldivar, B., & Hernandez-Lobato, D. (2022, June). Function-space Inference with Sparse Implicit Processes. In International Conference on Machine Learning (pp. 18723-18740). PMLR.
[2] Mescheder, Lars, Sebastian Nowozin, and Andreas Geiger. "Adversarial variational bayes: Unifying variational autoencoders and generative adversarial networks." International Conference on Machine Learning. PMLR, 2017.
[3] Santana, S. R., & Hernández-Lobato, D. (2022). Adversarial α-divergence minimization for Bayesian approximate inference. Neurocomputing, 471, 260-274.
[4] Ma, C., Li, Y., and Hernández-Lobato, J. M. (2019). “Variational implicit processes”. In: International Conference on Machine Learning, pp. 4222–4233.
[5] Sun, S., Zhang, G., Shi, J., and Grosse, R. (2019). “Functional variational Bayesian neural networks”. In: International Conference on Learning Representations.
[6] Ma, C., & Hernández-Lobato, J. M. (2021). Functional variational inference based on stochastic process generators. Advances in Neural Information Processing Systems, 34, 21795-21807.
[7] Deng, Z., Zhou, F., & Zhu, J. (2022). Accelerated Linearized Laplace Approximation for Bayesian Deep Learning. Advances in Neural Information Processing Systems, 35, 2695-2708.
[8] Antorán, J., Janz, D., Allingham, J. U., Daxberger, E., Barbano, R. R., Nalisnick, E., & Hernández-Lobato, J. M. (2022, June). Adapting the linearised laplace model evidence for modern deep learning. In International Conference on Machine Learning (pp. 796-821). PMLR.
[9] Gneiting, T., & Raftery, A. E. (2007). Strictly proper scoring rules, prediction, and estimation. Journal of the American statistical Association, 102(477), 359-378.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: #### Weaknesses
[This is meant as a response to the first 2 bullet points]
We thank the reviewer for their helpful suggestions. We will include the suggested references and give a thorough discussion in the final version of the manuscript. This allows us to contrast infinite-dimensional gradient-flow methods with infinite-dimensional parameter-space methods. The function-space literature focuses on a formulation of the problem in infinite-dimensional function spaces—but still uses a finite-dimensional gradient flow based on parameterisations to implement its algorithms. In contrast, our method operates on a finite-dimensional parameter space, but implements an infinite-dimensional gradient flow. The function-space view has the benefit that the functional loss is often convex and consequently the target $Q^*$ can be unimodal. However, the variational stochastic process still requires parameterization to be computationally feasible—and in this sense, function-space methods are FD-GVI approaches. The resulting objectives require a good approximation of the functional KL-divergence (which is challenging), and lead to a typically highly non-convex variational optimization problem in the parameterised space. While good initialization strategies may well be able to overcome some of these issues, this type of tuning requirement is common amongst FD-GVI approaches, a direct result of non-convexity, and typically not grounded in theory. Moving away from relying on these types of tuning strategies (whose effect is often poorly understood) serves as another motivation to attempt an infinite-dimensional gradient flow procedure: in principle, it allows us to exploit the convexity in the space of probability measures directly. The derived inference algorithms and the resulting asymptotic guarantees give some evidence for this conjecture: they hold regardless of the chosen initialisation.
That being said, we want to clarify that it is not our intention to question the validity of FD-GVI procedures or advocate for their abandonment: they are the dominant Bayesian deep learning paradigm for a reason, have considerable practical merits, and often work well in practice. We have mentioned and stressed this point more clearly in the new version of the manuscript. In spite of their practical utility, we do also believe that the mathematical challenges associated with FD-GVI serve as an additional motivation to investigate infinite-dimensional GD procedures. Indeed, doing this reproduced popular competing deep learning algorithms as diverse as DE and DLE—and even allowed us to derive DRLE, which provides a template for how our theory not only draws links between existing deep learning algorithms, but can also inspire new ones.
#### Questions
1. LLA relies heavily on a function space perspective, which we did not adopt in our work. It is possible that a thorough investigation of function space methods would lead to further connections but we consider this beyond the scope of this paper.
2. [This paragraph answers Question 2 and Limitation 1] Thank you for raising this point—this is indeed a very important point. We have included the following in the final version of the manuscript:
> In essence, the core justification for these generalisations is that the very assumptions justifying application of Bayes’ Rule are violated in modern machine learning. In practical terms, this results in a view of Bayes’ posteriors as one—of many possible—measure-valued estimators $Q^∗$ of the form in (1). Once this vantage point is taken, it is not clear why one should be limited to using only one particular type of loss and regulariser for every possible problem. Seeking a parallel with optimisation on Euclidean domains, one may then compare the orthodox Bayesian view with the insistence on only using quadratic regularisation for any problem. While it is beyond the scope of this paper to cover these arguments in depth, we refer the interested reader to Knoblauch et al. (2022).
3. Again, the reviewer is pointing out a very interesting avenue for future research. In principle, it is possible to use other divergences such as the ones mentioned by the reviewer (and indeed all f-divergences). However, the implementation of the gradient flow then often requires us to have access to the evolved pdf of the samples at time $t$. This can generically be replaced with a kernel density estimator based on the samples available at time $t$. However, it is well known that kernel density estimators suffer greatly from the curse of dimensionality. It is therefore practically infeasible to use them in the context of deep learning, as the parameter space over which we would need to compute said kernel density estimators is huge.
#### Limitations:
1. See Question 2.
2. We agree that the paper does not provide a full analysis of empirical performances. One reason for this is the limited amount of space. That being said, the more important reason is that we do not intend the paper to be a thorough study of any particular ‘market-ready’ methodology that can compete with the fine-tuned algorithms prevalent in industrial scale deep learning. Instead, our focus is on theoretical insight into what existing methods for uncertainty quantification in deep learning are actually doing—and this is reflected by our emphasis on connecting Bayesian and non-Bayesian methods through the analytical lens the paper proposes. As part of this, we conducted a range of experiments in the experimental section (incl. on UCI regression tasks) to showcase the limitations of naively adopting the current framework for designing new algorithms. In this sense, we believe that our empirical investigation serves the main purposes of the paper: to highlight how the derived theory aligns with reality, and to explain any divergence between theory and reality to help follow-up research with exploiting our ideas in order to develop more efficient and more effective algorithms.
---
Rebuttal Comment 1.1:
Title: Brief response to the rebuttal
Comment: I want to thank the authors for their insightful responses and detailed comments on the reviews, including mine.
After reading the rebuttals and going over parts of the article again, I'm really happy with what's been presented. I'm now even more convinced that we should accept this submission and I will update my review to reflect this. I consider this submission to be an interesting piece of work with important implications for future research.
Thanks again for the good work! | Rebuttal 1:
Rebuttal: ### General Response:
We want to thank all the reviewers for taking the time to read our manuscript so carefully and for providing valuable feedback that we believe will significantly improve the manuscript further. Overall, we have obtained a median score of 8, which is a wonderful reward for the countless hours that we have spent on this project and its technical content in the past year.
We summarize the reviewers' main feedback below and explain how we have addressed these points in the uploaded, updated version of the manuscript, which implements some of the most called-for changes.
1. **Reviewer NdiF** has expressed a wish for a more thorough discussion of FD-GVI in our manuscript, and has raised concerns that our representation of parameterised variational methods may fall short; and **Reviewer 4EE1** raised a similar concern. We thank both reviewers for pointing out this oversight, and have added to the discussion on related literature in Section 2.2, which now includes a paragraph on the state-of-the-art on function-space inference, implicit variational inference strategies, and normalizing flows. We believe this discussion will further highlight that the manuscript’s intent is not to imply that FD-GVI methods are impractical; and that rather, its intent is to thoroughly highlight their conceptual shortcomings relative to ID-GVI methods (which are the main subject of our investigations).
**Reviewer 6hMj** has rightfully pointed out that a sentence regarding the origin of the studied optimization problem in the introduction needs some reshaping to avoid overstating the contribution in Knoblauch et al. (2022). We agree with the reviewer, and have adjusted this sentence.
2. **Reviewer NdiF** has raised some valid points regarding the interpretability and meaning of the non-Bayesian nature of the measure-valued estimators which we derive. We agree that such a discussion is useful and necessary—we have added it to Section 2.1.
3. Both **Reviewers 9edE and 4EE1** have asked questions regarding a quantitative analysis of the approximation error caused by finite time, finite samples, and the use of unbiased estimators. We thank the reviewers for raising this, and have added the following discussion in Section 4.3:
> A notable shortcoming of Theorem 2 is its asymptotic nature. A more refined analysis would quantify how fast the convergence happens in terms of $N_E$, $T$, the SDE's discretisation error, and potentially even the use of unbiased estimators for the loss based on sub-sampling. While the existing literature could be adapted to derive the speed of convergence for DRLE in $T$ (Ambrosio et al., 2005, Section 11.2), this would require a strong convexity assumption on the potential $V$, which will not be satisfied for any applications in deep learning. This is perhaps unsurprising: even for the Langevin algorithm—probably the most thoroughly analysed algorithm in this literature—no convergence rates have been derived that are applicable to the highly multi-modal target measures encountered in Bayesian deep learning (Wibisono, 2019; Chewi et al., 2022).
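For reference, the (unadjusted) Langevin algorithm mentioned in this added passage is only a few lines. The sketch below targets a toy Gibbs measure $\propto e^{-V}$ with a quadratic $V$ — precisely the kind of strongly convex setting the rates apply to, and far simpler than the multi-modal losses of Bayesian deep learning; all names and parameter values are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def ula(grad_V, theta0, step=0.05, n_steps=20000, burn_in=1000):
    """Unadjusted Langevin algorithm:
        theta_{k+1} = theta_k - step * grad_V(theta_k) + sqrt(2*step) * xi_k,
    with xi_k ~ N(0, 1). Returns the post-burn-in trajectory."""
    theta = float(theta0)
    samples = []
    for k in range(n_steps):
        theta = theta - step * grad_V(theta) + np.sqrt(2.0 * step) * rng.standard_normal()
        if k >= burn_in:
            samples.append(theta)
    return np.array(samples)

# V(theta) = theta**2 / 2 makes the standard normal the target measure,
# so the trajectory's mean and variance should approach 0 and 1
samples = ula(lambda t: t, theta0=3.0)
```

The discretisation with step size `step` slightly inflates the stationary variance; for the quadratic potential the chain is an AR(1) process, which is what makes the convergence analysis tractable in this (convex) special case.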
4. **Several reviewers (6hMj, NdiF, AtWA)** pointed out that relative to the rest of the manuscript, its experimental section is comparatively limited. We agree that the paper does not provide a full comparison of empirical performances between different deep learning methods. The most important reason for this is that we do not intend the paper to be a thorough study of any particular ‘market-ready’ methodology that can compete with the fine-tuned algorithms prevalent in industrial-scale deep learning. Instead, our focus is on theoretical insight into what existing methods for uncertainty quantification in deep learning are actually already doing. This is reflected by the title and by our emphasis on connecting Bayesian and non-Bayesian methods through the analytical lens the paper proposes. So while we did conduct a range of experiments in the experimental section (including on UCI regression tasks), this was done to showcase the usefulness of our framework as well as the limitations of naively adopting it for designing new algorithms—rather than as evidence that the proposed framework can outcompete prevalent deep learning paradigms. In this sense, we believe that our empirical investigation serves the main purposes of the paper: to highlight how the derived theory aligns with reality, and to explain any divergence between theory and reality to help follow-up research exploit our ideas and develop more efficient and more effective algorithms.
We have also included a number of smaller changes listed below:
1. Adding discussion of why the MMD, as contrasted with the KLD, can encode useful properties as a regulariser (already included in Section 4.3; **Reviewer 9edE**).
2. Being more explicit about what we mean by ‘challenging space’ (already included in Section 2; **Reviewer 6hMj**).
3. Explaining that all integrals are over the parameter space $R^J$ of $\theta$, and that similarly, all gradients are with respect to $\theta$ (included just before Section 2.1; **Reviewer AtWA**).
4. Inclusion of a further setting for one of the experiments (the one for Figure 3) to make the messaging around the number of particles relative to the number of local minima clearer and more explicit (**Reviewer AtWA**; the resulting experiments can be found in the attached PDF).
In addition to changes that are already implemented, we will include further adjustments in the camera-ready version as listed below:
1. Adjustments to make the experimental section more readable and to state its purpose more clearly, with scope depending on space available (**Reviewer 9edE**).
2. Including a definition of the Wasserstein distance (**Reviewers NdiF, AtWA**).
3. Mentioning the two key assumptions required for Theorem 1 (**Reviewer 4EE1**).
4. Mentioning more explicitly why it is hard to solve the PDE in (4) (**Reviewer 9edE**).
Pdf: /pdf/125f1f118def11bdb3f9e775b0a0dc7d55d90c39.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: The paper offers a viewpoint on deep ensembles as an (unregularized) Wasserstein gradient flow in the space of probability measures. This viewpoint enables new algorithms for deep ensembles (Langevin and repulsive via MMD), which are evaluated on some small datasets.
Strengths: 1) The paper is technically sound and well-written. Overall it was easy to follow.
2) While many similar ideas have been floating around in the literature, the precise view presented here on deep ensembles seems novel, and I found Theorem 1 to be interesting.
3) While experiments on larger neural networks are missing, the effect of the proposed algorithms is clearly demonstrated in some controlled experiments and small data sets.
Weaknesses: 1) Perhaps the main weakness of the paper is the lack of a comparison of the new methods on large neural networks.
2) Many of the introduced tools (convexification via probabilistic lifting, Bayes with general divergence function, Wasserstein flows, etc.) are well-known. But I believe Theorem 1 and Theorem 2 offer some new insights (in case they are really correct, see Questions).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) In section 2, it is written that one lifts the problem to a more "challenging space" -- but I would instead say that this space is much simpler. The problem suddenly has a closed-form solution (Gibbs measure) that can be written down, one has convexity, etc.
2) The claim that the "infinite-dimensional regularised optimisation problem over the space of probability measures first introduced in Knoblauch et al. (2022)" seems quite bold -- the optimization problem (1) is a very fundamental one, and as discussed later in the paper (Section 2.1) there are many references. So perhaps this sentence in the introduction should be reshaped a bit?
3) In Theorem 1, is it really local minima, or could it also be saddle points? Couldn't a gradient flow in a nonconvex objective also get stuck at a saddle point or local maximum (when initialized at the maximum)? Imagine a landscape where we have a very large flat local maximum which has non-zero measure under the initial distribution. Is this somehow excluded in the assumptions?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: All limitations are addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: #### Weaknesses
1. We understand the reviewer’s concern and agree that the paper does not provide a full analysis of empirical performances. One reason for this is the limited amount of space that 9 pages allow in order to comprehensively present our firmly grounded theoretical framework and a number of its technical aspects. That being said, the more important reason is that we do not intend the paper to be a thorough study of any particular ‘market-ready’ methodology that can compete with the fine-tuned algorithms prevalent in industrial-scale deep learning. Instead, our focus is on theoretical insight into what existing methods for uncertainty quantification in deep learning are actually doing—and this is reflected by our emphasis on connecting Bayesian and non-Bayesian methods through the analytical lens the paper proposes. As part of this, we conducted a range of experiments in the experimental section (including on UCI regression tasks) to showcase the limitations of naively adopting the current framework for designing new algorithms. In this sense, we believe that our empirical investigation serves the main purposes of the paper: to highlight how the derived theory aligns with reality, and to explain any divergence between theory and reality to help follow-up research exploit our ideas and develop more efficient and more effective algorithms.
2. We thank the reviewer for their comment. Indeed, we do not want to exaggerate our proposal’s novelty. While we agree with the reviewer that many of our contribution’s building blocks have been floating around in the literature, we are unaware of any work that has combined them in the way the current manuscript has. Bayesian (and generalized Bayesian) procedures typically focus on justifications that are centered around updating prior to posterior knowledge. Our focus is different: we instead focus on regularization in the space of probability measures, and on algorithms that allow us to solve the resulting problems. On a technical level, we also innovate by introducing WGFs for solving these types of problems. To the best of our knowledge, WGFs have not previously been used to study how different regularisers translate into different properties of the inference algorithms, and how this in turn places inference algorithms as different as deep ensembles and variational Bayesian methods under the same overarching framework.
#### Questions
1. Thank you for pointing this out; the sentence is indeed misleading in a certain sense: The initial space here is the Euclidean space, and the ‘more challenging’ space refers to the space of all probability measures on the Euclidean space. From a standpoint grounded in analysis, the latter is indeed more challenging as it provides less structure: unlike Euclidean spaces, it is infinite-dimensional, non-linear and does not have an inner-product structure, which makes a thorough analysis more challenging. And yet, the reviewer is also correct to point out that the optimization problem itself is much simpler thanks to its convexity on this (more challenging-to-analyse) space. We intended to describe precisely this trade-off: we trade the simple Euclidean space for the more complicated space of probability measures in order to obtain an ‘easier’ convex objective function. We will slightly rephrase this sentence in the new version of the manuscript to stress this point more clearly.
2. Thank you very much for making us aware of this—you are absolutely correct: this sentence should not have been in the final manuscript. We will ensure that this part of the paper is fixed by providing the adequate context. If you believe there are additional relevant papers related to these types of problems that we should be citing but that are currently not contained in the manuscript, we would be very appreciative of you letting us know so that we can include them.
3. This is an excellent question. Two assumptions prevent this behavior: First, we assume that every saddle point has at least one strictly negative eigenvalue. Second, we assume that the Lojasiewicz inequality is satisfied (Lemma 3 in Appendix). Intuitively, the first assumption guarantees that locally around a saddle point, the domain of attraction has Lebesgue measure zero. Hence, if our initialisation measure $Q_0$ has a Lebesgue density, we will almost surely not be attracted to the saddle point and will leave its domain of attraction. The Lojasiewicz condition then guarantees that once we are in the domain of attraction of a local minimum, we will stay there and eventually converge to it. For more details, see [1]. Regarding the reviewer’s specific scenario: a loss with a flat local maximum that has non-zero measure under the initialization would violate the Lojasiewicz inequality—as the reviewer correctly hypothesizes, this setting is therefore not covered by the theory.
[1] Lee, J. D., Simchowitz, M., Jordan, M. I., and Recht, B. (2016). Gradient descent only converges to minimizers. In Conference on learning theory, pages 1246–1257. PMLR.
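The measure-zero domain of attraction can be illustrated numerically. Consider the hypothetical loss $V(x, y) = x^2 - y^2 + y^4/4$ (our toy example, not from the paper): the origin is a saddle with one strictly negative eigenvalue, gradient descent initialised exactly on the $x$-axis converges to the saddle, while any initialisation with $y \neq 0$ — i.e. almost every draw from a density — escapes to one of the minima at $(0, \pm\sqrt{2})$:

```python
import numpy as np

def grad_V(p):
    """Gradient of V(x, y) = x**2 - y**2 + y**4 / 4:
    saddle at the origin, local minima at (0, +sqrt(2)) and (0, -sqrt(2))."""
    x, y = p
    return np.array([2.0 * x, -2.0 * y + y ** 3])

def gradient_descent(p0, step=0.01, n_steps=5000):
    p = np.array(p0, dtype=float)
    for _ in range(n_steps):
        p = p - step * grad_V(p)
    return p

on_axis = gradient_descent([1.0, 0.0])    # y stays exactly 0: stuck at the saddle (0, 0)
off_axis = gradient_descent([1.0, 1e-3])  # tiny y-perturbation: escapes to (0, sqrt(2))
```

The set of initial points attracted to the saddle (here, the $x$-axis) has Lebesgue measure zero, so an initialisation measure with a density avoids it almost surely — the mechanism behind the first assumption above.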
---
Rebuttal Comment 1.1:
Comment: Thanks for all the detailed clarifications -- I now understand Theorem 1 and its assumptions much better.
Direct Training of SNN using Local Zeroth Order Method | Accept (poster) | Summary: This paper proposes a Local Zeroth Order method, which can fit arbitrary surrogate functions by sampling a group of variables from a certain distribution. Experiments have verified the superiority of the proposed scheme.
Strengths: 1. The authors' idea of fitting arbitrary surrogate functions by sampling is very novel.
2. Theoretical analysis about ZO function is persuasive and profound.
3. The experiments, especially on ResNet-19, clearly show the advantages of the ZO function.
Weaknesses: 1. If time permits, I suggest that the authors can supplement their experiments on large-scale datasets (e.g. ImageNet).
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. I note that the firing threshold used by the authors in the code seems to be a fixed value of 1, so has the expected threshold proposed by the authors in Section 4.5 been used in actual experiments?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: I think that the proposed ZO function may require more extensive experiments, especially on large-scale datasets. Overall, I think this paper has met the acceptance standard.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W:** If time permits, I suggest that the authors can supplement their experiments on large-scale datasets (e.g. ImageNet).
**A:** Due to the time constraint, we were not able to run our experiments on the full ImageNet dataset, but we could run them on the Imagenet-100 dataset, which has the same dimensionality as ImageNet. On this dataset, we confirmed that LocalZO is comparable in test accuracy to the standard surrogate. Please refer to our answer to reviewer cVHq for full details on this experiment (such details will also be added in the final version of our paper).
**Q:** I note that the firing threshold used by the authors in the code seems to be a fixed value of 1, so has the expected threshold proposed by the authors in Section 4.5 been used in actual experiments?
**A:** Actually, throughout the paper we ambiguously use the same term ``threshold'' for two different quantities. One is the firing threshold ($u_{th}$) that the membrane potential must reach for a neuron to spike (which in the code is set to 1); the other is the threshold ($\tilde{B}_{th}$) for the backpropagation of gradients, to which the results of Section 4.5 pertain. These thresholds were used in the experiments where we compare our method with the SparseGrad method.
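The distinction between the two thresholds can be sketched as follows (a hypothetical numpy illustration, not the paper's actual implementation; the boxcar surrogate shape and the value of the backpropagation threshold are our assumptions):

```python
import numpy as np

U_TH = 1.0  # firing threshold for the membrane potential (fixed to 1 in the code)
B_TH = 0.5  # backpropagation threshold from Section 4.5 (illustrative value)

def spike_forward(u):
    """Forward pass: a neuron spikes when its membrane potential reaches U_TH."""
    return (u >= U_TH).astype(float)

def sparse_surrogate_backward(grad_out, u):
    """Backward pass sketch: a boxcar surrogate controlled by the *separate*
    threshold B_TH -- gradients flow only through neurons whose potential lies
    within B_TH of the firing threshold, which makes the backward pass sparse."""
    active = np.abs(u - U_TH) < B_TH
    return grad_out * active / (2.0 * B_TH)

u = np.array([0.2, 0.9, 1.1, 2.0])
spikes = spike_forward(u)                          # spikes at 1.1 and 2.0
grads = sparse_surrogate_backward(np.ones(4), u)   # gradient only at 0.9 and 1.1
```

The point is only that `U_TH` governs spiking in the forward pass while `B_TH` governs which gradients are kept in the backward pass; the two are independent knobs.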
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I would like to thank the authors for addressing all my concerns, especially the clarification of the term "threshold" used in the paper, which is now clear to me. Regarding the experiments, given the time limit, I am satisfied that the authors provided results on additional datasets such as Imagenet-100.
In summary, I am happy to the response and would like to increase my rating from 7 to 8. | Summary: The authors propose a new method for training spiking neural networks
(SNNs). To estimate the gradient of the step function for spike
generation, they propose to directly estimate the gradient by local
sampling around the point of derivation and averaging the linearly
calculated slopes, a method that is known in other fields as
zeroth-order estimation of the gradient of an arbitrary (potentially
non continuous) function. They show that, if the distribution used for
sampling is chosen appropriately, the approximated gradient approaches
in expectation the common surrogate gradient approach, where an
arbitrary surrogate function is used for smoothening the gradient of
the step function. They further show how to choose the sampling
distribution for a couple of given surrogate functions. Finally, they
show that their approach outperforms previous methods on common
benchmark tasks in terms of accuracy. In particular, gradient
calculation is sparser (since gradients are deemed zero for larger
sampled values) in particular if only few values are sampled, and thus
more efficiently to compute (unless one truncates the surrogate, which
performs similarly in this regard).
Strengths: The paper makes a very interesting connection between the surrogates used to train SNNs and a sampling-based zeroth order method, and shows that and how (in theoretical depth) samples can be generated to approximate any surrogate used in the literature.
While the method does not significantly change the state-of-the-art, since the SparseGrad method (which truncates the surrogates to have a sparser backward pass) seems to perform very similarly in accuracy as well as compute efficiency, it nevertheless provides a very interesting theoretical underpinning of how sparseness can be introduced into the backward pass.
The paper is also very well written, clearly motivated, and provides clear mathematical proofs of its claims.
Weaknesses: While the theoretical connection is interesting, it could be argued that the existing SparseGrad method performs similarly in practice, so that there is not much of an improvement over the state-of-the-art. However, the accuracy still seems to be slightly improved due to the introduced randomness, which apparently leads to slightly better generalization.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: The general approach is motivated by dropout, where activity is randomly changed in the forward pass. However, the authors actually "apply" this principle only in the backward pass (as demanded by the zeroth-order approach). Since $m=1$ for most cases in the experiments, I wonder how results would change if spikes in the forward pass were also randomly generated (and not only the (pseudo) backward spikes), in other words, if the pseudo-backward spikes were actually also the forward spikes. This would be similar to a neuron with random spike generation, which is actually biologically very realistic. The randomness in the forward pass would effectively smooth over the Heaviside function as well. However, maybe in practice the activity profile of the SNN would be impacted too much. That might be an interesting discussion point.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: It would have been interesting to discuss / compare the methods with SNNs that use some form of random spike generation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W:** While the theoretical connection is interesting it could be argued that the existing SparseGrad method performs similarly in practice, so that there is not much of an improvement of the state-of-the-art. However, still, the accuracy seems to be slightly improved due to the introduced randomness that apparently leads to slightly better generalization.
**A:** As suggested by one of the reviewers, we further tested our method on several neuromorphic datasets that we did not consider previously. We hope that these new results emphasize even further the advantage of LocalZO method compared to surrogate gradient. Please refer to our reply to the reviewer cVHq for the results and experimental setting.
**Q:** The general approach is motivated by dropout, where activity is randomly changed in the forward pass. However, the authors actually "apply" this principle only for the backward pass (as demanded by the zeroth-order approach). Since for most cases in the experiments, I wonder how results would change if spikes in the forward pass would be also randomly generated (and not only the (pseudo) backward spikes), in other words, when the pseudo-backward spikes where actually also the forward spikes. This would be similar to neuron with random spike generation, which actually is biologically very realistic. The randomness in the forward pass would effectively smooth over the Heaviside function as well. However, maybe in practice the activity profile of the SNN would be impacted too much. That might be an interesting discussion point.
**A:** This is a rather curious point, and one that we have considered and plan to pursue. As you pointed out, a neuron which is a random spike generator is a biologically plausible concept; and on the ANN side of the story, there are results showing that ReLU or Leaky ReLU activations whose slopes are sampled from some distribution during training (with the expected slope used during inference) show improved performance over their respective rigid versions.
To introduce randomness in the forward pass along the above lines, one can simply, at each time step, randomly sample the threshold, i.e. $u_{th}\sim \lambda$, where $\lambda$ is some distribution, ideally with finite support contained in the positive real numbers. Already here, there is a choice for what the output of the neuron should be. The first natural choice is $H(u[t]-u_{th})$ ($H$ being the Heaviside function), while in the backward pass one uses any surrogate applied to the input $|u[t]-u_{th}|$. First tests of this choice (VGG16, CIFAR10 dataset (no data augmentation), 60 epochs) show that the performance is comparable to that of the plain setting, but we noticed that the train and test accuracies stay close to each other (while in the plain setting the network tends to overfit early on), so one may suspect that even in this simple setting the model is able to generalize well whatever it learned (of course, this is a rather simple experiment and one should not take any results or conclusions as definitive).
More interesting, in our modest opinion, is the situation where the neuron outputs $u_{th}\cdot H(u[t]-u_{th})$. Your point that the randomness in the forward pass would effectively smooth over the Heaviside function seems to be borne out in this situation. Moreover, all sorts of new phenomena arise: one can study the probability of a neuron (with fixed membrane potential) firing, the expected output of a neuron, or the distribution of outputs. At the same time, in the backward pass one faces some interesting choices for the surrogate gradients. If we follow the settings of the previous experiment and use some fixed surrogate, it seems that the network learns slightly worse than in the other setting (with a similar generalization property). But it also seems that there should be a natural choice for the surrogate, which, intuitively speaking, should depend on the two-dimensional distribution of ``neuron membrane potential - neuron output'' (the situation should however be compared with the setting of probabilistic spiking neural networks, cf. https://arxiv.org/pdf/1910.01059.pdf).
In conclusion, we firmly agree with your suggestion of introducing randomness in the forward pass, and moreover of introducing it in such a way that the backward gradient becomes ``naturally apparent'' rather than remaining a choice. The potential elegance of this situation requires detailed understanding and extensive experiments, which we hope to pursue in the future.
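A rough numpy sketch of the first variant described above — a neuron whose firing threshold is resampled at every time step — might look as follows (the uniform threshold distribution, its support, and all names are our illustrative assumptions, not the rebuttal's actual experiment):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_threshold_spike(u, low=0.8, high=1.2):
    """Stochastic neuron sketch: at each time step, sample the firing
    threshold u_th ~ Uniform(low, high) -- a distribution with finite
    support in the positive reals -- and output H(u - u_th)."""
    u_th = rng.uniform(low, high, size=np.shape(u))
    return (u >= u_th).astype(float)

# Firing becomes probabilistic near the threshold: for a fixed potential u,
# P(spike) = clip((u - low) / (high - low), 0, 1), which is exactly the
# 'smoothed Heaviside' effect discussed above.
u = np.full(100000, 1.0)
rate = random_threshold_spike(u).mean()  # ~ 0.5 for u in the middle of the support
```

Averaged over the threshold distribution, the expected output is a ramp in `u` rather than a step, which is the sense in which forward-pass randomness smooths the Heaviside function.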
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed responses. | Summary: This paper proposes a new direct training algorithm for SNN, combining the standard surrogate methods and zeroth order method together. The algorithm applies the 2-point zeroth order method on the Heaviside function to generate a surrogate gradient, which is more efficient. The author applied his method to various dataset such as CIFAR-10 and CIFAR-100 and outperforms the SOTA methods.
Strengths: 1. Due to the efficiency of the zeroth order method, this algorithm is computationally friendly, reducing the computational burden significantly. The gradients are only back-propagated through active neurons.
2. According to Theorem 3, the algorithm is able to simulate arbitrary surrogates, which means this algorithm has great expressive capability.
Weaknesses: 1. This paper has limited novelty. It seems simply to be a combination of the forward gradient method [1] and the sparse gradient method [2].
2. There are concerns about the scalability of the proposed method, as the forward gradient method is not widely applicable to large networks [1].
3. The experiments only show marginal improvements.
[1] SCALING FORWARD GRADIENT WITH LOCAL LOSSES. ICLR 2023
[2] Sparse Spiking Gradient Descent. NeurIPS 2021.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How to choose the distribution function of z for different tasks?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: No. The authors should discuss the limitation of the scalability of the forward gradient method in large networks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1:** This paper has limited novelty. It seems simply to be a combination the forward gradient method [1] and sparse gradient together [2].
**A1:** We elaborate below on the motivation and technical implementation of our work, hoping to make clearer why our method is not simply a combination of the two methods you mentioned.
1) Loosely speaking, the forward gradient method estimates the updates of the weights of the network by observing the change in the outputs of the network with respect to perturbations of the weights. In a variant of this general principle, the authors in [1] use a somewhat local version, where the gradients of the weights are estimated from perturbations of the activations of the subsequent postsynaptic neurons and learning is performed in combination with local losses.
Contrary to these principles, zeroth-order in our method stems from a different motivation and is focused solely on the spiking neuron's activation function (i.e. the Heaviside function) and its gradient. In general, one uses a fixed surrogate gradient, i.e. a function which will serve in the backward pass in place of the non-existent (in the classical sense) derivative of the Heaviside function. For us, the surrogate function changes from time step to time step, and from neuron to neuron, depending on the ``activity'' of the neuron itself (please refer to the motivation section of our paper for more details and interpretation of the activity of the neuron). Also, in our work the training is done fully end to end using a loss on the output.
2) Sparse gradient method [2] proposes to use the surrogate gradients with compact support, demonstrating its advantages in the energy efficiency, as in general there will be abundance of zero gradients that will not be passed in the backward pass. However, once the surrogate function is chosen, it is fixed throughout the training, and although comparable to using surrogate gradients which have non-compact support, it is slightly lagging behind when it comes to performance.
The surrogate gradients that are present in our method have compact support at each time step, potentially offering the benefits of sparsegrad method. However, we do not fix upfront this surrogate function, but rather it changes from time step to time step, in a somewhat random but controllable way: Random, because it depends on a random sample from a distribution and controllable because the distribution is fixed.
The method we propose is more than the sum of its basic parts. Motivated by the effect of randomness-based regularizers such as dropout, our goal was to introduce a direct training method which has regularizing effects due to the introduced randomness, which offers the benefits of the sparsegrad method, and which at the same time has the ability to simulate both non-compactly and compactly supported surrogate functions, hence keeping the best of both worlds (performance and potential energy efficiency).
3) Next, we provide both a theoretical and a practical framework for our method, further establishing its soundness and validity. Our Theorems 2 and 3 establish a close connection between the distributions that are intrinsic to our method and the surrogate functions that are obtained in expectation, showing that LocalZO can be used as a substitute for any surrogate function used in the SNN literature. You may refer to Section 4.4 for applications of these results to some of the more commonly used surrogates.
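As an aside, this correspondence between the sampling distribution and the expected surrogate can be checked with a small Monte Carlo sketch (using one common form of the 2-point zeroth-order estimator; the variable names and the choice of Gaussian samples are ours). Averaging the finite differences of the Heaviside function over $z \sim N(0,1)$ should recover the Gaussian-pdf surrogate $\frac{1}{\delta}\varphi(u/\delta)$:

```python
import numpy as np

rng = np.random.default_rng(0)

def heaviside(x):
    return (x >= 0).astype(float)

def local_zo_grad(u, delta=0.5, m=100000):
    """2-point zeroth-order estimate of the Heaviside 'derivative' at u:
    each sample contributes |z| * (H(u + delta*|z|) - H(u - delta*|z|)) / (2*delta),
    which is non-zero only when |u| < delta*|z| -- i.e. each sample acts as a
    random, compactly supported surrogate."""
    z = np.abs(rng.standard_normal(m))
    diff = heaviside(u + delta * z) - heaviside(u - delta * z)
    return np.mean(z * diff / (2.0 * delta))

delta = 0.5
mc = local_zo_grad(0.3, delta)
# expected surrogate value: (1/delta) * N(0,1)-pdf evaluated at u/delta
exact = np.exp(-0.3 ** 2 / (2 * delta ** 2)) / (delta * np.sqrt(2 * np.pi))
```

Each individual sample has compact support (the sparsity benefit), while the average over samples reproduces a smooth, non-compactly supported surrogate — the "best of both worlds" point made above.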
**W2:** Worry about the scalability of the proposed method, as the forward gradient method is not widely applicable to large networks [1].
**A2:** As we apply the forward gradient only locally, at the neuronal level, to provide the derivative of the Heaviside function, there is no problem applying the method to deeper or larger networks. For example, we trained on ImageNet-100 and confirmed that LocalZO is comparable in test accuracy to the standard surrogate. Please refer to our answer to reviewer cVHq for full details on this experiment (such details will also be added in the final version of our paper).
**W3:** The experiments only show marginal improvements.
**A3:** We tested the LocalZO method, as suggested by one of the referees, on several other neuromorphic datasets. The method demonstrates an advantage in generalization performance compared to standard surrogate training. Please refer to our reply to reviewer cVHq, where we present the newly obtained results and the experimental setting.
**Q:** How to choose the distribution function of z for different tasks?
**A:** This is an interesting and, if we may say so, difficult question. To the best of our knowledge, there is no systematic study of the performance of surrogate functions on different tasks, and in our experience one can find in the literature all sorts of (reasonable) functions yielding trained models that perform well on various tasks and datasets (and in fact, different surrogate functions performing well on the same tasks). On the other hand, the reviewer may take a look at the paper by Y. Li et al., "Differentiable Spike: Rethinking Gradient-Descent for Training Spiking Neural Networks", where a somewhat related question has been addressed.
[1] Scaling Forward Gradient with Local Losses. ICLR'23.
[2] Sparse Spiking Gradient Descent. NeurIPS'21.
---
Rebuttal Comment 1.1:
Title: Still have some concerns
Comment: I appreciate the authors' feedback. I still have some concerns about the paper.
1. Efficiency vs. generalization. The proposed method introduces randomness to improve generalization. However, this is highly related to the number of samples of z, and a large number of samples may impair computational efficiency. Could the authors show a study of different numbers of samples used in the algorithm, for both accuracy and computational cost, so that we can see the separate effects of the local forward method and the randomness?
2. Scalability of the method. As shown in the rebuttal to reviewer cVHq, the method only gains marginal improvements, or even worse results, on the ImageNet-100 dataset.
3. Lack of details on the adaptivity of the zeroth-order method. I appreciate the theoretical analysis showing that the proposed method can approximate arbitrary surrogate functions. However, it is not clear to me how the algorithm adaptively selects the distribution of z during training, especially since there is no closed form for the expected surrogate function.
---
Reply to Comment 1.1.1:
Comment: **Q1** Efficiency vs generalization.
**A1** We provide additional details of the LocalZO method from Table 3 of the paper. The tables below show the test accuracy of the LocalZO method, and the overall speedup compared to the surrogate, for $m \in \{1, 3, 5, 7, 10, 20, 100\}$, with z sampled from a Gaussian distribution, averaged over 5 experiments. We also report the accuracy of the corresponding Gaussian surrogate.
In general, increasing m lets the method approximate the surrogate better while still offering a regularizing effect that potentially improves generalization, but it also requires more computation. Larger m also leads to more non-zero gradients at the neuronal level in the backward pass, which reduces the overall speedup. On the other hand, smaller m introduces higher (less "controlled") randomness, which still yields regularization and helps obtain better generalization, as well as a higher potential speedup.
In conclusion, m should be treated as a hyper-parameter whose value depends on the training setting itself. We chose m = 1 or 5 for most of our experiments, as a proof of concept, but also because these values offer a good balance between speedup and performance. The more complete tables below show the effect of m, in terms of both speedup and accuracy.
NMNIST , Surrogate Acc: 93.70, std: 0.17
|m|1|3|5|7|10|20|100|
| ---- | ---- | ---- | ---- | ---- | ---- | --- | --- |
| Acc: | 93.29 | 93.61 | 93.69 | 93.66 | 93.76 | 93.67 | 93.81 |
|Std: | 0.08 | 0.15 | 0.17 | 0.13 | 0.14 | 0.08 | 0.14 |
| Speedup |3.33|3.28|3.22|3.16| 3.06 | 2.82 | 1.59 |
SHD, Surrogate Acc: 75.47, std: 0.69
|m|1|3|5|7|10|20|100|
| ---- | ---- | ---- | ---- | ---- | ---- | --- | --- |
| Acc | 76.55 | 76.55 | 76.50 | 75.49 | 75.51 | 74.96 | 76.71 |
| std. | 0.93 | 0.65 | 0.90 | 0.66 | 0.81 | 0.68 | 0.49 |
| Speedup | 4.75 | 4.62 | 4.47 | 4.39 | 4.25 | 3.89 | 2.24 |
FMNIST, Surrogate Acc: 83.35, std: 0.16
|m|1|3|5|7|10|20|100|
| ---- | ---- | ---- | ---- | ---- | ---- | --- | --- |
| Acc | 81.79 | 83.40 | 83.64 | 83.70 | 83.85 | 83.75 | 83.87 |
| std. | 0.06 | 0.06 | 0.12 | 0.04 | 0.11 | 0.11 | 0.05 |
| Speedup | 1.89 | 1.85 | 1.78 | 1.75 | 1.70 | 1.56 | 0.88 |
Additionally, to evaluate quantitatively how the quality of the LocalZO estimator improves with the number of random samples $m$, we provide below a plot of the mean and standard deviation of the distribution of this estimator, for $m = 1, 5, 10, 20$, where the LocalZO estimator is sampled $10^5$ times.
More precisely, we consider the Gaussian ZO estimator with $\delta = 0.5$, and for each coordinate $u$ of the input to the Heaviside function (from a grid of 120 values from $-3$ to $3$), we compute $10^5$ samples $g_i(u)$, $i \in \{1, \dots, 10^5\}$, of the LocalZO estimator of the gradient as
$g_i(u) = \frac{1}{m} \sum_{j=1}^m G^2(z_j)$, with
$G^2(z) = \begin{cases} \frac{|z|}{2\delta} & \text{if } |u| \leq |z|\delta, \\ 0 & \text{otherwise,} \end{cases}$
(from eqns. (5, 6)), where $z_1, \dots, z_m$ are i.i.d. samples from a standard normal distribution. We then report the mean (as the main curve) and the standard deviation (as the shaded area) of the samples $\{g_1(u), \dots, g_{10^5}(u)\}$, for each value of $u$.
As we can observe, already after $m = 5$ samples the standard deviation becomes reasonably low, and it decreases further as we increase $m$.
https://anonymous.4open.science/r/rebtalneurips-D01C/
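The Monte Carlo procedure just described can be sketched in a few lines of NumPy (an illustrative re-implementation under the stated assumptions: Gaussian z, $\delta = 0.5$; we use $10^4$ rather than $10^5$ repeats per grid point to keep it quick):

```python
import numpy as np

rng = np.random.default_rng(1)

def localzo_estimates(u, m, delta=0.5, n=10_000):
    # Draw n independent LocalZO estimates g_i(u), each averaging m samples
    # of z ~ N(0, 1):  g_i(u) = (1/m) * sum_j G2(z_j), where
    # G2(z) = |z|/(2*delta) if |u| <= |z|*delta, else 0   (eqns. (5, 6)).
    z = rng.standard_normal((n, m))
    g2 = np.where(abs(u) <= np.abs(z) * delta, np.abs(z) / (2.0 * delta), 0.0)
    return g2.mean(axis=1)

u = 0.2  # one grid point of the input to the Heaviside function
samples = {m: localzo_estimates(u, m) for m in (1, 5, 10, 20)}
means = {m: s.mean() for m, s in samples.items()}
stds = {m: s.std() for m, s in samples.items()}
# The mean is stable across m (it is an unbiased estimate of the expected
# surrogate), while the standard deviation shrinks roughly like 1/sqrt(m).
```

Repeating this for every grid value of $u$ and plotting `means` as the curve and `stds` as the shaded band reproduces the plot linked above.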
**Q2** Scalability of the method.
**A2:** It is known that regularizers such as dropout require longer training or larger models for their advantages to take effect (see, for example, references [1, 2]). As the LocalZO method incorporates randomness as a regularizer, how long it takes for its advantages to show depends on the size of the dataset and the network. For smaller-scale datasets, our experiments show that the advantages over the surrogate are visible after fewer epochs. For larger datasets, such as ImageNet-100, we need to train for more epochs to show an advantage.
With our discussion of the choice of m from our answer to your first question in mind, we provide the results of training both LocalZO and the surrogate on the ImageNet-100 dataset for 300 epochs, with batch size 72.
| | Surrogate | LocalZO |
| ---- | -----: | -----: |
| ImageNet-100 | 81.23 | 83.33 |
As we can see, this time the advantage of our method is much more pronounced compared to the previously reported results. We use the ImageNet policy with standard augmentation, and m = 20 for LocalZO.
[1] Goodfellow, I., Bengio, Y., and Courville, A. Deep learning. MIT press, 2016.
[2] Hernández-García, A., & König, P. (2018). Data augmentation instead of explicit regularization. | Summary: This paper presents a direct SNN training algorithm that alleviates the loss of gradient information and improves SNN performance on multiple datasets, including both static image datasets and dynamic vision datasets.
Strengths: - Rigorous theoretical and empirical analysis to justify the necessity of the Zeroth Order technique.
- Comprehensive experimental results with improved performance against the previous SoTA method.
Weaknesses: - The proposed method is only verified on small-scale datasets such as CIFAR-10/100 and DVS-CIFAR-10. Since the proposed method shows improved performance on simple vision tasks, it is necessary to further verify the performance on large-scale datasets such as ImageNet-1K or ImageNet-100.
- I understand that static CIFAR datasets are standard vision tasks in most recent direct SNN training methods. However, one of the major advantages of SNNs is the ability to process the spatio-temporal visual information that widely exists in captures from event-based sensors/cameras. Solely verifying the proposed method on the DVS-CIFAR10 dataset is insufficient; it would be very useful for the research community if the authors could provide performance on more DVS datasets (e.g., IBM Gesture, NCARS, N-Caltech101), as reported by [1].
[1] AEGNN: Asynchronous Event-based Graph Neural Networks, CVPR'22.
- I wonder if the proposed method is applicable to the SpikeFormer [2]?
[2] Spikformer: When Spiking Neural Network Meets Transformer, ICLR'23
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the Weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The major limitation of the paper is the insufficient experimental results.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1:** The proposed method is only verified on small-scale datasets such as CIFAR-10/100 and DVS-CIFAR-10. Since the proposed method shows improved performance on simple vision tasks, it is necessary to further verify the performance on large-scale datasets such as ImageNet-1K or ImageNet-100.
**A1:** We perform experiments of ImageNet-100 dataset using an integrate and fire (IF) SEW-Resnet34 model [7] with ZerO initialization [5]. We compare LocalZO (m=5, $\delta$=0.5) with the corresponding Gaussian Surrogate, both implemented using TET loss, trained for 200 epochs. The experiments are performed with standard augmentation (RandomResizedCrop(224),
RandomHorizontalFlip), **with** and **without** ImageNet Policy [8]. The obtained top-1 accuracy is comparable to the SOTA results [6], and exceeds it with the presence of ImageNet Policy. We do not observe any significant loss of accuracy when compared to the surrogate method.
| Datasets | Surrogate | LocalZO | Surrogate+Aug | LocalZO+Aug |
| :------------- | :-------------: | :---------: | :------------------: | :------------------: |
| Imagenet-100 | 78.38 | **78.58** | **81.58** | **81.56** |
**W2:** I understand that static CIFAR datasets are standard vision tasks in most recent direct SNN training methods. However, one of the major advantages of SNNs is the ability to process the spatio-temporal visual information that widely exists in captures from event-based sensors/cameras. Solely verifying the proposed method on the DVS-CIFAR10 dataset is insufficient; it would be very useful for the research community if the authors could provide performance on more DVS datasets (e.g., IBM Gesture, NCARS, N-Caltech101), as reported by [1].
**A2:** We consider the datasets suggested by the reviewer and compare results of LocalZO vs. the surrogate, where LocalZO uses the standard Gaussian distribution to sample z (m=5, $\delta$=0.5), and the surrogate method uses the corresponding Gaussian surrogate as obtained in eqn. (9). We perform the experiment with the TET loss vs. a plain cross-entropy loss accompanied by tDBN, following the experimental settings in the paper.
The events of the neuromorphic datasets are collected into event frames of dimension (2 $\times$ H $\times$ W), and the number of frames (a.k.a. number of bins) are considered as the temporal dimension for SNN. We set the number of frames to 10, and they are resized to dimension ($2 \times 48 \times 48$) for all the neuromorphic datasets. It is the same pre-processing step we followed to obtain DVS-CIFAR-10 results.
We perform the experiments under two settings, **with** and **without** data augmentation, using the VGGSNN architecture reported in the paper. Under data augmentation we use the standard techniques of RandomCrop(48, padding=4) and RandomHorizontalFlip. We train the models from scratch for 200 epochs (batch size 64 for DVS-Gesture, and 16 for N-Caltech and NCARS), while the other training hyper-parameters remain the same as previously reported in the paper.
For the DVS-Gesture dataset (a.k.a. IBM Gesture) we obtain a top-1 accuracy of 98.43\%, which is comparable to the state-of-the-art accuracy (98\%) reported for this dataset [1]. On the N-Caltech-101 dataset we obtain a top-1 accuracy of 82.99\%, which is higher than the reported SOTA (81.7\%) that uses transfer learning on a SEW-ResNet-34 model pre-trained on ImageNet [2]. For the NCARS dataset we obtain a top-1 accuracy of 96.96\%, which is again higher than the SOTA of 94.5\% [1].
The comparison with the corresponding Gaussian surrogate shows that the randomness introduced by LocalZO frequently helps training obtain better generalization performance. This better generalization by LocalZO holds irrespective of the randomness introduced by data augmentation, and irrespective of the accompanying tDBN or TET method.
We thank the reviewer for suggesting DVS datasets, we shall include these results in the future version of the paper.
| Datasets | Loss | Surrogate | LocalZO | Surrogate+Aug | LocalZO+Aug |
| :----------- |:------: | :-------------: | :---------: | :------------------: | :------------------: |
|DVS-Gesture | TET | 94.9 | **98.04**| 96.09 | **98.43** |
| | tDBN | 87.89 | 95.31 | 92.97 | 91.41 |
| N-Caltech-101 | TET | 67.24 | **79.86** | 76.04 | **82.99** |
| | tDBN | 68.4 | 74.65 | 75.58 | 79.05 |
| N-CARS |TET | 95.42 | **96.78** | 95.09 | **96.96**|
| |tDBN | 94.06 | 95.96 | 94.83 | 95.68 |
**W3:** I wonder if the proposed method is applicable to the SpikeFormer [2] ?
**A3:** Spikformer uses the derivative of the sigmoid function as the surrogate gradient for the Heaviside function (see Appendix C1 of [2]). The LocalZO method is able to simulate the derivative of the sigmoid function (see Appendix A.2.1 of our paper), and as such it is applicable to Spikformer. Although this would be a very interesting application of our method that we would be happy to test in the future, it goes beyond the present scope of our paper.
[1] AEGNN: Asynchronous Event-based Graph Neural Networks. CVPR'22.
[2] Spikformer: When Spiking Neural Network Meets Transformer. ICLR'23.
[3] Sequence Approximation Using Feedforward Spiking Neural Network for Spatiotemporal Learning: Theory and Optimization Methods. ICLR'22.
[4] End-to-End Learning of Representations for Asynchronous Event-Based Data. ICCV'19.
[5] ZerO Initialization: Initializing Neural Networks with only Zeros and Ones. TMLR'22.
[6] Sharing Leaky-Integrate-and-Fire Neurons for Memory-Efficient Spiking Neural Networks. arXiv:2305.18360 (2023).
[7] Deep Residual Learning in Spiking Neural Networks. NeurIPS'21.
[8] AutoAugment: Learning Augmentation Policies from Data. arXiv:1805.09501 (2019).
---
Rebuttal Comment 1.1:
Title: Well-received rebuttal
Comment: The additional experimental results presented by the authors further prove the performance of the proposed method. It would be great if the authors could include these comprehensive experimental results in the next version of the paper.
To that end, I will increase my score from 5 to 6. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Spatially Resolved Gene Expression Prediction from Histology Images via Bi-modal Contrastive Learning | Accept (poster) | Summary: The paper proposes learning a contrastive-learning-based joint embedding space to align gene expressions and histology. This embedding space is used to generate expression predictions for queried patches from the histology modality. The paper shows improved correlations in predicting gene expressions, as well as a better measure of expression heterogeneity, compared to HisToGene and ST-Net, two popular supervised regression-based methods for gene expression prediction.
Strengths: 1. The paper proposes a new approach to the problem of predicting gene expression from histology. The problem space is important since sequencing is still expensive and not always available, and its relationship with histology is ripe ground for research in multimodal analysis.
2. The paper seems to be the first one to predict gene expression using bi-modal alignment (although CCA based alignment has been done previously). The reasoning for choosing an alignment-based method is well motivated. The contrastive learning method is simple and effective, and probably overfits less than a direct regression-based method.
3. The correlation results show significantly improved performance on the task of gene expression prediction. Moreover, the results in Figure 2 highlight the very important characteristic of the method not losing out on the heterogeneity information of the genes.
[1] Ash, Jordan T., et al. "Joint analysis of expression levels and histological images identifies genes associated with tissue morphology." Nature communications 12.1 (2021): 1609.
Weaknesses: 1. The results show correlations of the reference and predicted gene expressions. However, there's no metrics to show how well the method does for spatially resolving these predictions. The results are also shown on a single dataset.
2. The contrastive learning framework for multimodal problems isn't novel in itself. The smooth loss, too, has been formulated in different ways in previous work [1, 2, 3]. Thus the main contribution of the paper is its formulation and application to the task of gene expression prediction. The paper is predominantly an application paper; therefore, more extensive ablations or more datasets would have been more convincing. The results shown do, however, demonstrate improvements over previous baselines.
3. The paper doesn't motivate choices like smooth contrastive loss, k-nearest neighbor based selection, linear combination to get the imputed genes etc. through empirical ablation experiments.
4. The paper shows the differences between using a fixed scale vs variable scale for spatially resolved predictions. It's useful to motivate why the absolute values for these predictions matter and looking at fixed scale is the way to go.
5. The gene-gene correlation is interesting because HisToGene seems to be closer to the original expression data than the proposed method. The authors note that the method appears to accentuate certain positive and negative correlations, but it's not clear if this is desirable. Similar arguments about the clustering experiment can be made as well where HisToGene seems to do better (although I agree that this isn't a great measure of prediction quality).
[1] Denize, Julien, et al. "Similarity contrastive estimation for self-supervised soft contrastive learning." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2023.
[2] Zheng, Mingkai, et al. "Ressl: Relational self-supervised learning with weak augmentation." Advances in Neural Information Processing Systems 34 (2021): 2543-2555.
[3] Wei, Chen, et al. "Co2: Consistent contrast for unsupervised visual representation learning." arXiv preprint arXiv:2010.02217 (2020).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Which correlation measure is used? Please specify.
2. It's not clear how the plot in Figure 2 was obtained. Is it the mean and variance for each gene across all slides in the test dataset? Why are there multiple points for the predicted expression profile? Some details on this section would be helpful for readers.
3. Can you please share your thoughts on points 1, 3, 4, 5 in the previous section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I believe the authors have shared some limitations in various sections, including some seemingly negative results (which is commendable!). I would really appreciate a commentary on when an alignment-based method is expected to do better than a direct regression method and vice versa, and an explicit mention of some limitations and potential future directions of exploration in this area.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for these comments. We summarize the major points below along with our rebuttal to each point.
Correlation Measure:
- We used the Pearson correlation coefficient for measuring both prediction-GT correlation and gene-gene correlation.
Clarification on Figure 2:
- Each point on the plot refers to a single gene, whose value reflects its normalized mean or variance across a test sample slice. Blue dots correspond to the ground truth values, while the orange dots represent the inferred values. The gene rankings are performed based on ground truth values in increasing order.
Metrics for spatially resolving the predictions:
- We provide the qualitative outcomes in Figure 3 and Supplementary Figure 1 to demonstrate BLEEP's proficiency in spatially resolving these predictions. BLEEP generates meaningful unsupervised clusters that align with the periportal and pericentral regions of the sampled liver tissue.
- For a quantitative evaluation of BLEEP's clustering, we reference the Normalized Mutual Information (NMI) and Adjusted Rand Index (ARI) metrics in Supplementary Table 2. However, due to the continuous gradient of biological variation, the choice of clustering method and its parameters significantly influences the definition of discrete clusters. Therefore, involvement of domain experts to annotate ground-truth regions on the H&E image is necessary for a more comprehensive and robust assessment of spatial resolution, and is planned for future work.
Motivation and empirical experiment for using smooth contrastive loss, knn, linear combination for imputation:
- Please see rebuttal addressed to all reviewers for ablation studies and discussion
Fixed scale vs. variable scale:
- The importance of fixed vs. variable scale can be split into two components. Fundamentally, Visium expression data measure the discrete number of captured transcripts that map to each gene. The absolute values in the provided ground truths are therefore meaningful and should be reproduced by any predictive model in terms of both mean and variance. Practically, one may argue that relative expression, irrespective of variance, is sufficient for downstream use cases, particularly since the counts measured via Visium are themselves subject to a stochastic sampling process during the experiment. However, predicting absolute expression allows meaningful comparisons of magnitude across different genes, which contains richer information than z-scores (as each gene has a different mean and variance). Furthermore, a fixed scale enables direct comparisons of expression levels across different H&E slides, giving an absolute frame of reference. Visualizing predictions on a fixed scale benchmarks a method's ability to retain these types of information.
Lastly, both HisToGene and ST-Net were trained on the same data as BLEEP. The former two approaches' inability to learn the variance of the held-out dataset is a bug, not a feature, and reflects shortcomings in those methods.
Concern regarding the gene-gene correlation heatmap:
- The gene-gene correlation heatmap from HisToGene does seem more similar to the original expression. This is likely attributable to BLEEP's averaging effect during the imputation process. As discussed in the paper, the Visium experimental platform is susceptible to sampling noise. Averaging the expression levels across the top K most similar spots to any query image patch may serve to average out some of this noise and uncover additional biologically meaningful signals, akin to pseudo-bulking the top K most likely expression profiles for any given query image patch. Supporting this observation, we draw attention to the high and low values in the original gene-gene correlation heatmap. The same correlation patterns are well represented in BLEEP's heatmap, just more accentuated, similar to what one would expect from pseudo-bulking. By learning a joint embedding between image and expression, BLEEP is capable of identifying suitable spots to perform pseudo-bulking in an unbiased way, and we argue this is desirable for downstream biological inquiry. Nevertheless, such averaging effects may also mask some genuine biological signals. This limitation is noted in the manuscript and requires further investigation.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response. I will keep my original rating of a weak accept. | Summary: The authors present BLEEP, a bi-modal embedding framework capable of generating spatially resolved gene expression profiles of whole-slide H&E-stained histology images. This work stems directly from spatial transcriptomics, which spatially maps gene expression profiles onto H&E images. A well-described task is predicting these gene expression profiles from the H&E images alone. This is the task the authors set out to address, and they compare their results with several related methods. Their technical contribution is to apply CLIP-style training plus 'query-reference imputation' to predict expression profiles, rather than relying on discriminative learning alone. They show results that outperform previously reported methods.
Strengths:
1) The authors introduce the topic well. They do a good job discussing previous work.
2) The paper is well written.
3) The figures are clear and instructive.
4) This is a potentially important biomedical computer vision task.
5) The smoothed target and the query look-up strategy is a novel and intuitive contribution to this task.
Weaknesses:
1) The paper is, overall, underdeveloped. They have made a small technical contribution and applied this to a very small and narrow domain dataset (normal hepatic tissue). I believe that small technical contributions have an essential place in biomedical AI research, but they must be accompanied by a very strong validation across organ systems, tumor types, platforms, etc. I understand that the availability of these datasets is limited, but the authors have limited their contribution to a single dataset for no clear reason. I would recommend additional experiments in at least two other organ systems, preferably a disease dataset, to demonstrate the generalizability of this method.
2) Minor comment: how are the 'pairs with similar morphological features or expression landscapes' selected? The authors do not specify how these pairs are selected in order to generate the smoothed targets.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: No, there was not a clear description of the limitations or societal impact of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for these comments. We summarize the major points below along with our rebuttal to each point.
Additional experiments:
- While our paper focuses on a single organ system, this choice was made to deeply explore and validate our approach and demonstrate the effectiveness and applicability of the proposed method. There are no technical barriers to extending BLEEP to other organs. Ongoing internal experiments on brain tissue have shown promising results for BLEEP's generalization capacity. We are open to including the results of these experiments in the camera-ready version if deemed necessary by the reviewer. In line with your recommendation, we are currently building foundation models that span multiple organ systems, which will be released as future work.
Pairs with similar morphological features or expression landscapes:
- In our approach, we don't explicitly "select" these pairs but rather compute internal similarity matrices for both the morphological features ($H_v$) and the expression landscapes ($H_x$). The internal similarities, $sim(H_v, H_v)$ and $sim(H_x, H_x)$, are computed as the dot product of each feature matrix with its transpose, representing the similarity between every pair of samples. The resulting similarity-adjusted target matrix, $target = \mathrm{softmax}\big( (sim(H_x, H_x) + sim(H_v, H_v))/2 \cdot \tau \big)$, allows us to account for the inherent similarities in the data, smoothing the targets based on these similarities.
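A minimal NumPy sketch of this smoothed-target computation (our own illustration with toy embeddings; the exact embedding normalization and placement of the temperature $\tau$ may differ in detail from the released code):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable row-wise softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def smoothed_targets(H_v, H_x, tau=1.0):
    # Internal similarity matrices: dot product of each feature matrix
    # with its transpose, i.e. pairwise similarities within each modality.
    sim_v = H_v @ H_v.T
    sim_x = H_x @ H_x.T
    # Similarity-adjusted targets: instead of a one-hot identity target,
    # probability mass is shared among samples that look alike in either
    # modality (morphology or expression).
    return softmax((sim_x + sim_v) / 2.0 * tau, axis=-1)

rng = np.random.default_rng(0)
n, d = 4, 8
H_v = rng.standard_normal((n, d))   # toy image embeddings
H_x = rng.standard_normal((n, d))   # toy expression embeddings
T = smoothed_targets(H_v, H_x)
# Each row of T is a probability distribution over the batch, used in
# place of the identity matrix as the CLIP-style cross-entropy target.
```

With identical, well-separated samples in both modalities, T approaches the identity matrix and the loss reduces to the standard CLIP objective.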
---
Rebuttal Comment 1.1:
Comment: To further clarify and provide an update regarding your comment on our limited evaluation, we are actively working to collect more data for testing. However, we need high/full resolution H&E images, as captured by the experimental imaging system (TIFF files as large as 8GB), and we have discovered that only low resolution images are typically shared, forcing us to contact the authors of many studies to access the required high resolution images. Unfortunately, none of the groups we have contacted have been able to provide these high resolution images in a timely manner. This is the main reason we have only included one data set in our manuscript, as this data was available via our collaborators with full resolution images. If we are able to get additional full resolution images in time for NeurIPS deadlines, we will include these as additional tests of our method. Additionally, we believe the current lack of publicly available full resolution images alongside published spatial transcriptomics datasets is a reflection of the field's current lack of means to include these images during data analysis. We hope that our work will raise awareness about the need and value of sharing full resolution images with spatial transcriptomics datasets, and incentivize more researchers to publish these. This will become increasingly important as more spatial transcriptomics analysis methods are developed that also benefit from full resolution image data, and will support the larger community goal of understanding the relationship of transcriptomics and imaging modalities. We will clarify these points in the camera-ready version of the manuscript if successful. | Summary: The authors introduce a method, called BLEEP, of imputing the (aggregate) gene expression profile of cells in patches of histology images. 
Inspired by CLIP, BLEEP trains image and profile encoders to jointly embed paired images and expression profiles, except it replaces the typical CLIP loss with a novel loss that matches the matrix of similarities between embeddings of image and expression profiles to a matrix of target similarities. Gene expression profiles are imputed for a patch by averaging the expression profiles of reference samples closest in the embedding space. The authors demonstrate SOTA predictive performance over related methods (e.g., HisToGene and ST-Net), while maintaining greater biological and spatial variability, and being less susceptible to batch effects.
Strengths: ### Originality
To the best of this reviewer's knowledge, this is the first use of a CLIP-like joint embedding objective for learning to predict gene expression profiles from H&E histology images. Also, the loss introduced by the authors in place of the typical CLIP loss appears to be novel.
### Quality
The work appears to be of sufficient quality to be credible.
### Clarity
The paper is well-written and easy to understand. The discussion is particularly detailed and interesting.
### Significance
Gene expression profile prediction from unstructured biological readouts like H&E images has significant value, since gene expression is interpretable and causal.
Weaknesses: - It is not clear that the loss introduced by the authors, replacing the typical CLIP loss, is required to obtain the reported performance as the authors claim. A comparison with results obtained using the typical CLIP loss would be useful and should be added.
- The prediction procedure aggregates profiles from a reference dataset, making BLEEP limited in its potential application. It would be interesting to train an image-to-profile decoder and compare the results, especially wrt improved generalization. This is a heavy lift, though, and not required for the revisions.
- As the authors already point out, the aggregation used for query-reference imputation could be removing useful biological signal or have a number of smoothing effects. A discussion of the choices made for imputation and their effects on prediction performance should be added.
- Regarding the previous point, however, there is a lack of details regarding the query-reference imputation step, such as the number of samples used and the method of aggregation.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - It is not clear if any hyperparameter tuning was done when choosing the default configurations of all models used in this paper. The huge differences in performance make this reviewer wonder if suboptimal parameters were chosen, even if published configurations were used for HisToGene and ST-Net. Please comment.
- It is not clear why HisToGene and ST-Net were chosen for comparison. Please comment.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The Discussion section adequately addresses some potential limitations of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for these comments. We summarize the major points below along with our rebuttal to each point.
CLIP loss w/ smooth vs. w/o smooth comparison:
- Please see rebuttal addressed to all reviewers for ablation studies and discussion
Choices made for imputation and their effects on prediction:
- Please see rebuttal addressed to all reviewers for ablation studies and discussion
Details on query-reference steps, implementation details:
- During query-reference imputation, we agree more details will be useful and plan to include them in the camera-ready version of our manuscript if we are successful during rebuttal. We will expand the discussion on implementation details in the revised version. In BLEEP’s default setting, we use the top 50 most similar samples for query reference imputation, and take their average expression for aggregation.
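The default described here (top 50 most similar reference samples, aggregated by averaging) could be sketched as follows. This is a hypothetical illustration under the stated defaults, not the authors' implementation; the function and variable names are ours, and dot-product similarity in the joint embedding space is an assumption.

```python
import numpy as np

def impute_expression(query_emb, ref_embs, ref_exprs, k=50):
    """Average the expression profiles of the k reference samples whose
    embeddings are most similar (by dot product) to the query embedding."""
    sims = ref_embs @ query_emb            # similarity of the query to every reference
    top_k = np.argsort(sims)[-k:]          # indices of the k most similar references
    return ref_exprs[top_k].mean(axis=0)   # aggregate by simple averaging

# Toy example with random data
rng = np.random.default_rng(1)
ref_embs = rng.standard_normal((200, 16))  # 200 reference spots, 16-dim embeddings
ref_exprs = rng.random((200, 30))          # expression of 30 genes per reference spot
query_emb = rng.standard_normal(16)
pred = impute_expression(query_emb, ref_embs, ref_exprs, k=50)
```

Averaging over k neighbors trades variance for noise suppression, which is consistent with the smoothing effects discussed elsewhere in the reviews.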
Settings for the comparison experiments:
- For HisToGene and ST-Net, we adopted the published configurations, as these parameters had been determined optimal in the original studies. Further, we conducted 3-trial experiments with different random initializations for more robust results. For BLEEP, extensive hyperparameter tuning was done during the development process, e.g. ViT backbone vs. Conv. backbone for the image encoder. Therefore, the performance differences are not due to manually chosen suboptimal parameters but reflect the inherent differences in these approaches. We will include evaluation scripts in the code release for community replication.
Comparison with HisToGene and ST-Net:
- HisToGene and ST-Net are state of the art methods for the task of expression prediction from images in this field and are not weak comparisons. The performances of these methods highlight the difficulty of the image to expression task and are competitive reflections of the current state of the field to our best knowledge.
- We surveyed many other related methods such as HE2RNA[1], hist2RNA[2], HIST2ST[3], and more recently, CeLEry[4] and SCHAF[5]. However, they are not one-to-one comparisons: they either take different information as input or tackle different settings with different use cases. For example, SCHAF requires both the H&E image and the paired single-cell RNA sequencing data rather than spatial transcriptomics data, the former containing no spatial information but higher expression resolution. In short, all these surveyed methods were carefully evaluated and deemed not suitable to include as comparisons.
[1] Schmauch, Benoît, et al. "A deep learning model to predict RNA-Seq expression of tumours from whole slide images." Nature communications 11.1 (2020): 3877.
[2] Mondol, Raktim Kumar, et al. "hist2RNA: An efficient deep learning architecture to predict gene expression from breast cancer histopathology images." Cancers 15.9 (2023): 2569.
[3] Zeng, Yuansong, et al. "Spatial transcriptomics prediction from histology jointly through transformer and graph neural networks." Briefings in Bioinformatics 23.5 (2022): bbac297.
[4] Zhang, Qihuang, et al. "Leveraging spatial transcriptomics data to recover cell locations in single-cell RNA-seq with CeLEry." Nature Communications 14.1 (2023): 4050.
[5] Comiter, Charles, et al. "Inference of single cell profiles from histology stains with the Single-Cell omics from Histology Analysis Framework (SCHAF)." BioRxiv (2023): 2023-03.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their rebuttal and attempts at clarifying the choices made. I will maintain my current rating. | Summary: The authors have developed the model titled BLEEP, which is a contrastive learning implementation, trained on data from the 10x Genomics Visium platform, a common spatial genomics platform. Spatial genomics generates high dimensional data that includes both images and RNA expression on small patches on tissue slides. BLEEP uses two distinct encoders for the image and expression data. These embeddings are then used as inputs for contrastive learning by a modified CLIP method. The approach of contrastive learning is sound given the high-dimensional nature of both the data which is being learned from and the images that are to be predicted.
The model is tested vs. benchmark models on marker genes, highly variable genes, and highly expressed genes, and the model shows a modest increase in correlation relative to other models. BLEEP is also compared on how well gene expression is predicted for several genes. Figure 2 illustrates that the BLEEP model does show a better correlation to gene expression variation than other models. Additional figures show the heatmaps of actual expression relative to model predictions for fixed and variable scaled outputs. Section 4.3 discusses how the model is perhaps more robust to artifactual portions of images.
Strengths: I appreciate the color expressed around how "ill-posed" the problem is that the authors are trying to address. Likewise, there is a huge dimensionality challenge and the sequencing methods are rather flimsy as stated. Going into this problem realizing the challenge is very good.
The contrastive learning approach is likely the best approach to the task at hand.
The presentation of the data does not express absolute confidence in the results, and the authors only suggest that perhaps their methods are more sound, given that the task itself may not be solvable.
Weaknesses: There is no guarantee that the signal being derived by the given methods is in fact learned from the data presented. The presentation of improved variance prediction is probably the strongest evidence in this regard.
Even though the model appears to outperform benchmarks, the correlations remain very low, and the predictions are likely not particularly useful.
Technical Quality: 2 fair
Clarity: 4 excellent
Questions for Authors: Can you foresee any circumstance where the model could be used on image data alone to provide information at a low cost relative to running the spatial genomics platform?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 4 excellent
Contribution: 3 good
Limitations: The main limitation is that this may not be a useful task to pursue. However, it does seem authors are aware of this limitation and if we don't try to tackle what seems like an impossible task then it is hard to make progress.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for these comments. We summarize the major points below along with our rebuttal to each point.
There is no guarantee that the signal being derived by the given methods is in fact learned from the data presented; the presentation of improved variance prediction is probably the strongest evidence in this regard:
- Our proposed method aims to learn a joint representation between the H&E image and expression from the data presented. The improved variance prediction relative to other methods may be attributed to a well-learned joint embedding in conjunction with our choice of the query-reference inference strategy.
Even though the model appears to outperform benchmarks, the correlations remain very low and the predictions are likely not particularly useful:
- We thank the reviewer for this important question. We are aware that full expression prediction from image features is likely ill-posed. However, we do believe there is substantial mutual information between image features and a subset of gene expressions. Our proposed contrastive learning objective helps prioritize these genes without any injection of prior knowledge. As seen in Table 2, the most well-predicted genes for this dataset are also functional and well documented by prior biological research. For example, CYP1A2 is an oxidizing enzyme that is also widely cited as a liver zonation marker. Furthermore, the top predictions from BLEEP are rather consistent with the top predictions by HisToGene and ST-Net, but with higher correlations to the original expression, highlighting a step in a promising direction.
Can you foresee any circumstance where the model could be used on image data alone to provide information at a low cost relative to running the spatial genomics platform:
- Yes we believe our work may stimulate others to iterate upon our efforts and ultimately make advances towards this goal.
- As it stands currently, we chose to benchmark our method on 3 sets of genes (marker genes, highly expressed genes, and highly variable genes). We believe these 3 sets of genes are likely more enriched for genes useful for diagnosis and treatment (drug targets). Within these categories, while average correlation hovers between 0.17-0.21, some subset of these genes are quite reasonably predicted and potentially useful for the aforementioned applications. However, we agree more work is needed to further examine the potential of BLEEP in this use case. We are conducting follow-up experiments to examine the effect of increased reference size and are working on expanding the reference to cover multiple tissue types.
- Furthermore, this method could pave the way for the construction of more comprehensive foundation models for spatial transcriptomics analysis [1].
[1] Cui, Haotian, et al. "scGPT: Towards Building a Foundation Model for Single-Cell Multi-omics Using Generative AI." bioRxiv (2023): 2023-04.
---
Rebuttal Comment 1.1:
Comment: Thank you for your sound and clarifying comments. I maintain my official review scores. | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for their comments and their acknowledgment of the strengths of our paper, including:
- “significantly improved performance” and “not losing out on the heterogeneity information.” (J9t4).
- “first use of a CLIP-like joint embedding objective for learning to predict gene expression profiles from H&E histology images” (QFTu)
- “The smoothed target and the query look-up strategy is a novel and intuitive contribution” (WHAP)
- “Contrastive learning approach is likely the best approach to the task at hand” (VLr1)
- “Experiment looks reasonable and interesting” (XnkF)
We have performed the requested ablation experiments from some of the reviewers. We share the resulting ablation table in the attached PDF below. From this experiment we make a few important observations:
- The choice of K during the query-reference imputation process influences the prediction quality quite negatively when a low value is selected (K = 10). Values of K above our default value could provide some small improvements to the correlation of the resulting predicted expression values for the HVG and HEG gene sets, but the differences were not pronounced. This is interesting but in line with what one might expect from taking the pseudo-bulk of the top K most likely expression profiles given any query image. However, the MG gene set did not show much improvement with increasing K. Furthermore, doing so may carry a trade-off of further systematically deviating from the original variance of the dataset due to the increased averaging effect. With this in mind, we feel our default value of K = 50 remains adequate.
- The most similar match between query and reference is usually not the best prediction (as seen from the 3rd row of the ablation table and indirectly the 4th row when predictions are weighted by their similarity). However, we suspect the gap may close to some degree as the reference grows further in size; in general, some amount of averaging is desired for query-reference imputation to remove some noise intrinsic to the Visium platform. That said, in the discussion section of the manuscript, we further highlight the possibility of genuine biological signals being averaged out, which is an important consideration to be further investigated.
- Smoothing the contrastive loss objective to take into account patch similarity showed modest increase in performance. The gain in performance may be due to the fact that relaxing the contrastive objective is more compatible with the similarity based inference strategy. The smoothing may help lessen the extent similar references are pushed apart in embedding space during training, resulting in improved querying of the top K most likely expression profiles given a query image patch during the inference stage.
Lastly, for all reviewers, we want to further motivate BLEEP and clarify our vision regarding future extensions of BLEEP and the project’s potential impact:
- The task of spatially resolved, transcriptome-wide expression prediction given an H&E stained image is likely extremely difficult. BLEEP offers an unbiased way to prioritize genes that are more likely to be well predicted from image features via the learning of a joint embedding between image and expression features. It also tackles the curse of dimensionality through the use of query-reference imputation while simultaneously alleviating some technical noise intrinsic to the Visium platform. These design choices allowed us to see significant improvements over state-of-the-art methods such as ST-Net and HisToGene (upwards of a 120% increase in correlation with ground truth expression).
- BLEEP is currently restricted for research purposes only and still has a lot of room for improvement. Nevertheless, BLEEP may already be immediately useful for gaining biological understanding of H&E images due to the connections drawn between image and expression features. H&E staining is a ubiquitously used experimental technique in biology; our work may allow further understanding of H&E stained samples through learning the joint embedding. Similarly, clinical classification in the field of pathology could benefit from better understanding of image features in terms of molecular features, which underlie disease. Lastly, understanding the relationship of images and gene expression will help projects like the human cell atlas [1] to create coordinate frameworks that position genes spatially in terms of tissue anatomy, and ultimately in terms of the whole body.
- While our paper focuses on a single organ system, this choice was made to deeply explore and validate our approach to demonstrate the effectiveness and applicability of the proposed method. There are no technical barriers to extend BLEEP to other organs. Ongoing internal experiments on brain tissue have shown promising results of BLEEP’s generalization capacity. We are open to including the results of our experiments in the camera ready version if deemed necessary by the reviewer. In line with the recommendation of some reviewers, we have already begun efforts towards building foundation models that span multiple organ systems, and it will be released as future work.
- Finally, we believe expert annotation of the different regions of the H&E image will provide us with better ground truths for tissue-wide expression pattern benchmarking. This will be planned for future work.
[1] Rozenblatt-Rosen, Orit, et al. "The Human Cell Atlas: from vision to reality." Nature 550.7677 (2017): 451-453.
Pdf: /pdf/18232f05c54a6bee24278e2c8dcd474d7f00bc89.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper studies the problem of gene expression profiling using histology images. They propose a bi-modal embedding framework BLEEP (Bi-modaL Embedding for Expression Prediction), which is capable of generating spatially resolved gene expression profiles of whole-slide Hematoxylin and eosin (H&E) stained histology images. The proposed method can significantly reduce the time and cost associated with gene expression profiling.
Strengths: 1. The paper utilizes a deep learning method to tackle the problem of gene expression prediction, reducing the time and cost.
2. The paper is well-written and easy to follow. The motivation is clearly illustrated.
3. The experiment design looks reasonable and interesting. Supplementary materials illustrate details of experimental settings.
Weaknesses: 1. Weak comparisons. This work only compares with two deep learning methods, HisToGene and ST-Net. There should be other deep learning methods for image processing that can be applied to this area, and the authors should compare with more baselines to confirm the efficacy of the proposed method.
2. Limited impact of the proposed method. Although the problem in this paper is interesting, this work should discuss the broader impact of the proposed method. For instance, how can we apply this method to other biomedical applications?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The proposed method can effectively address the problem of gene expression prediction. However, it is not clear whether we can apply this method to more applications.
Flag For Ethics Review: ['Ethics review needed: Privacy and Security (e.g., consent, surveillance, data storage concern)']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for these comments. We summarize the major points below along with our rebuttal to each point.
Weak comparisons. This work only compares with two deep learning methods HisToGene and ST-Net:
- HisToGene and ST-Net are state of the art methods for the task of expression prediction from images in this field and are not weak comparisons. The performances of these methods highlight the difficulty of the image to expression task and are competitive reflections of the current state of the field to our best knowledge.
- We surveyed many other related methods such as HE2RNA[1], hist2RNA[2], HIST2ST[3], and more recently, CeLEry[4] and SCHAF[5]. However, they are not one-to-one comparisons: they either take different information as input or tackle different settings with different use cases. For example, SCHAF requires both the H&E image and the paired single-cell RNA sequencing data rather than spatial transcriptomics data, the former containing no spatial information but higher expression resolution. In short, all these surveyed methods were carefully evaluated and deemed not suitable to include as comparisons.
[1] Schmauch, Benoît, et al. "A deep learning model to predict RNA-Seq expression of tumours from whole slide images." Nature communications 11.1 (2020): 3877.
[2] Mondol, Raktim Kumar, et al. "hist2RNA: An efficient deep learning architecture to predict gene expression from breast cancer histopathology images." Cancers 15.9 (2023): 2569.
[3] Zeng, Yuansong, et al. "Spatial transcriptomics prediction from histology jointly through transformer and graph neural networks." Briefings in Bioinformatics 23.5 (2022): bbac297.
[4] Zhang, Qihuang, et al. "Leveraging spatial transcriptomics data to recover cell locations in single-cell RNA-seq with CeLEry." Nature Communications 14.1 (2023): 4050.
[5] Comiter, Charles, et al. "Inference of single cell profiles from histology stains with the Single-Cell omics from Histology Analysis Framework (SCHAF)." BioRxiv (2023): 2023-03.
This work should discuss broader impact of the proposed method. For instance, how can we apply this method to other biomedical applications:
- We currently have no plans of applying this method to other biomedical applications. However, we are dedicating our efforts to extending this work to cover multiple organs and modalities, which we anticipate will help further improve the performance and generalizability of BLEEP for predicting expression based on H&E images.
- In terms of broader impact, we anticipate that BLEEP may improve biological understanding of H&E images by drawing connections between image features and gene expression. H&E staining is a ubiquitously used experimental technique in biology; our work may allow further understanding of H&E stained samples through learning the joint embedding. Furthermore, clinical classification in the field of pathology could benefit from better understanding of image features in terms of molecular features, which underlie disease. Lastly, understanding the relationship of images and gene expression will help projects like the human cell atlas [6] to create coordinate frameworks that position genes spatially in terms of tissue anatomy, and ultimately in terms of the whole body.
[6] Rozenblatt-Rosen, Orit, et al. "The Human Cell Atlas: from vision to reality." Nature 550.7677 (2017): 451-453.
---
Rebuttal 2:
Comment: Thanks for the rebuttal. I think the authors have addressed my concerns as they have explained their broader impact in healthcare. I would like to raise my score. | null | null | null | null | null | null |
Strategyproof Voting under Correlated Beliefs | Accept (poster) | Summary: Post rebuttal: I improved my scores and encourage the authors to include the information from the rebuttal in the final text.
The authors study strategy-proofness in voting under the assumption that the voters do not have, as is usual, full knowledge about the votes/preferences of the other voters, but rather have some form of beliefs about them. Specifically, they consider that the election they participate in was generated using the Mallows model and either their vote is the central one (confident setting) or not. They also consider several other similar distributions. Their main result is that, in this setting, Plurality is strategy-proof.
On the mathematical level the paper is OK, but I have serious doubts about the significance of the result. In short, the assumed belief models mean that from the perspective of the considered agent, she is winning. Indeed, if the central ballot agrees with the current vote, then the top choice candidate is expected to have the highest plurality score. Then, strategy-proofness for Plurality is essentially built into the definition. The situation where strategy-proofness and strategic behavior are interesting is where there is expected contention. The current model does not capture this.
Another issue is that, as far as I can tell, the message from the paper would be to use the Plurality rule. But this certainly is not a rule we would like to use.
All in all, the paper certainly makes a step in an interesting direction and, perhaps, this direction would eventually lead to interesting results. However, in its current shape, it is quite far from giving any definite answers (and, as such, could certainly find a home in more focused and specialized conferences).
Strengths: makes some progress in a widely studied topic
Weaknesses: - the results are not relevant practically
- the strategyproofness for Plurality is, in essence, built into the considered model
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Would you consider your results as arguments for using the Plurality rule in practice?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations were addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
**Reviewer comment:**
> In short, the assumed belief models mean that from the perspective of the considered agent, she is winning. Indeed, if the central ballot agrees with the current vote, then the top choice candidate is expected to have the highest plurality score. Then, strategy-proofness for Plurality is essentially built into the definition. The situation where strategy-proofness and strategic behavior are interesting is where there is expected contention. The current model does not capture this.
**Response:**
This seems to be the reviewer's main concern. We are confident that it stems from a misunderstanding, and believe the rebuttal should fully address it.
First, the reviewer appears to be referring to the "confident" versions of our model, whereas our results also apply to the "unconfident" versions.
Second, and much more importantly, the statement "the top choice candidate is expected to have the highest plurality score" is also true when "plurality" is replaced with, say, "Borda" — but our results show that Borda is not OBIC in our setting! The question is not whether the manipulator's top choice has the highest chance of winning (not to mention the highest expected score), but whether reporting a different ranking leads to this candidate winning *even more often*. It is not the case that one implies the other. Indeed, almost all rules satisfy the first, but, of the ones we study, only plurality avoids the second. In summary, the reviewer's justification for the assertion that "strategy-proofness for Plurality is essentially built-in into the definition" is incorrect.
**Reviewer comment:**
> Another issue is that, as far as I can tell, the message from the paper would be to use the Plurality rule. But this certainly is not a rule we would like to use. [...] Would you consider your results as arguments for using the Plurality rule in practice?
**Response:**
We disagree with the statement that plurality "is not a rule we would like to use": with few exceptions, it's the voting rule that's always used in practice.
Now, based on anecdotal evidence, many social choice theorists do seem to prefer other voting rules to plurality. But this preference is based on a variety of criteria for comparing voting rules: axiomatic desiderata, maximum likelihood estimation under various noise models, distortion, distance rationalizability, etc. Simplicity of preference elicitation is another important criterion, in which plurality excels.
Strategyproofness typically isn't a primary criterion in the comparison of voting rules, due to Gibbard-Satterthwaite. Our approach and results can be seen as potentially elevating this criterion so that it can augment the set of criteria that are being examined. Whether this would tilt the scales in favor of plurality is subjective, but we believe it's a criterion that should certainly play a role in the discussion.
To explicitly answer your question: yes, we view our results as arguments for using plurality in practice, which should be considered alongside other theoretical and empirical arguments.
---
Rebuttal Comment 1.1:
Title: Response
Comment: I accept your arguments and I like that you were bold enough to clarify your view on Plurality. I think the paper would benefit from making them clear in the text as well (although, admittingly, researchers will challenge your view).
I am still not overly excited with the work, but sufficiently to not choose rejecting evaluations.
---
Reply to Comment 1.1.1:
Comment: We greatly appreciate your attention to our rebuttal. We'll be sure to incorporate the points raised in the rebuttal into the revised version of the paper. | Summary: This paper explores a probabilistic form of strategy-proofness for voting rules referred to as Ordinally Bayesian Incentive Compatible (OBIC) in a setting where voters believe that other voters have correlated votes. The paper considers both the situation when voters believe others have preferences similar to their own, and when they believe other voters have correlated preferences that are not necessarily similar to their own. The primary positive result of the paper explores three correlated preference/belief models and shows that they are "top-choice correlated"; i.e., under them, when all else is equal a "voter's top choice is likely to perform better than other candidates." It is then shown that under any top-choice correlated beliefs the plurality rule is OBIC. Subsequent negative results identify a number of specific situations where other scoring rules are not OBIC.
Strengths: The paper is generally well written. Theorems are clearly stated and there is a strong combination of high level explanation alongside technical definitions. Reasonable motivation is given for studying the questions focused on in the paper, and it is already a well-established area of research.
While there is a great deal of research on problems of this nature under independent beliefs, thus far the body of work examining results under correlated beliefs is comparatively small and much more recent. This paper adds novel results to the domain. The significance of any single result in a paper such as this is typically not massive but addressing questions larger than those studied here is quite difficult to fit into a single paper. In that sense, the results are of very reasonable significance.
Weaknesses: Generally the paper does a good job of explaining the high-level meaning behind the heavier notation; however, the proof sketches are a fair bit more involved than I would expect. As is common these days, full proofs are not given in the paper itself but relegated to the appendices.
As noted above, the results are not hugely significant but I do not believe it is realistic to publish papers in this domain only if they are groundbreaking. The amount of work required for incremental progress is significant.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: I found the initial motivating example quite reasonable but it loosely acknowledges that any real election does not have an infinite number of voters. Theorem 2 goes on to rely on "sufficiently large n" which seems weakened given the motivation. Is that a reasonable criticism and does that weaken the impact of the result? (Admittedly, the theorem is already rather removed from having impacts on real-world elections)
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors are clear about the limitations of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Reviewer Comment:**
> Theorem 2 goes on to rely on "sufficiently large n" which seems weakened given the motivation. Is that a reasonable criticism and does that weaken the impact of the result?
**Response:**
This requirement is theoretically necessary, because it is possible to define positional scoring rules that are arbitrarily close to Plurality (say giving some $\varepsilon \ll 1$ points to the second place candidate). However, we don't believe this greatly hinders the impact of the result, when viewed in conjunction with our other results. Indeed, Theorem 3 shows that Borda fails for any $n \ge 2$, and explicit computation shows that many other "reasonable" rules also fail for small $n$.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their clear and sufficient response and leave this comment to acknowledge reading it. I will uphold my review as it is. | Summary: This paper presents several results related to strategy proof voting rules when the set of agents has correlated beliefs. The classic results in social choice theory assume that a manipulating agent has access to the entire set of preferences of all agents, while in the setting discussed in the paper we assume that agents have correlated beliefs and we want to find voting rules that are incentive compatible under a Bayesian model (typically called OBIC rules). The paper provides a positive result in that plurality is OBIC for a number of popular models (Mallows, PL) and show negative results for other positional scoring rules (PSRs) under Mallows and a few results on Copeland and Maximin.
Strengths: + The result that plurality is SP under a large space of correlated beliefs is interesting and a nicely proven result.
+ The writing is good and well presented, the paper sets itself well in the literature.
Weaknesses: - My biggest comment here is the fit for NeurIPS. This is a pretty straight social choice paper. While there are 2 references to ICML papers, the bulk of the paper is a pretty straight statistical analysis/bounds paper (AISTATS?). This isn't all bad, but it would be nice to include at least a bit more discussion on the relevance to the venue.
- The results are largely negative when we have n > 3 or n=3 in that most rules are not OBIC -- while I like the result for plurality, this ties in with my last point: what's the takeaway here for the ML community?
### Minor Issues:
* Maybe add the plurality result to Table 2 so it is complete for the entire set of results in the paper.
* "strategy-proof for a large class of beliefs containing the specific ones we introduce" --> this isn't clear in the abstract, please revise.
* "Note that all the rules we consider are Pareto efficient." --> minor quibble, but this is introduced at line 81 and never defined or returned to.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: ? What about Urn models? It seems like the conditions for Lemma 1/2 would hold for Urn type models but I'm not sure.
-------
After Rebuttal:
Thanks for the rebuttal and answering my questions -- I overall liked this paper and glad we could identify a place to strengthen the work.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitation discussion is not really there but at the end of the day the paper is well scoped so doesn't need to be added (though see weakness above).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Reviewer Comment:**
> My biggest comment here is the fit for NeurIPS. This is a pretty straight social choice paper. While there are 2 references to ICML papers, the bulk of the paper is a pretty straight statistical analysis/bounds paper (AISTATS?). This isn't all bad, but it would be nice to include at least a bit more discussion on the relevance to the venue.
**Response:**
Computational social choice in general has long been of interest to AI/ML researchers. In NeurIPS alone there have been many examples of this over the last decade.
With respect to voting under stochastic ranking models, the topic we study, NeurIPS papers include:
* "Random utility theory for social choice" (Azari Soufiani et al. 2012)
* "Generalized method-of-moments for rank aggregation" (Azari Soufiani et al. 2013)
* "Diverse randomized agents vote to win" (Jiang et al., 2015)
* "Is approval voting optimal given approval votes?" (Procaccia and Shah, 2015)
* "Axioms for learning from pairwise comparisons" (Noothigattu et al., 2020)
In terms of computational social choice more broadly, NeurIPS papers include:
* Citizen's assemblies: "Neutralizing self-selection bias in sampling for sortition" (Flanigan et al., 2020), "Fair sortition made transparent" (Flanigan et al., 2021)
* Participatory budgeting: "Proportional participatory budgeting with additive utilities" (Peters et al., 2021)
* Models of representative democracy: "A mathematical model for optimal decisions in a representative democracy" (Magdon-Ismail and Xia, 2018)
* Distortion of voting rules: "Efficient and thrifty voting by any means necessary" (Mandal et al., 2019)
* Smoothed analysis: "The smoothed possibility of social choice" (Xia, 2020)
With this in mind, the topic is quite relevant to at least a sizable subset of the NeurIPS community. We would also certainly include more of the aforementioned citations in a camera-ready revision.
**Reviewer Comment:**
> What about Urn models? It seems like the conditions for Lemma 1/2 would hold for Urn type models but I'm not sure.
**Response:**
We are not currently aware of urn models that generate rankings. However, we certainly believe that a large class of models that lead to some level of correlation (as preferential-attachment models tend to in other settings) would satisfy Lemma 1.
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: 1) So I wasn't asking for a list of COMSOC papers, but again, providing more of this in the context of the paper itself would greatly improve its positioning. So thanks for the list, but it would be better to explain how these papers intersect with the topics of the current paper in the context of the conference.
Either way, I leave this point for the AC rather than myself.
2) All urn models generate rankings, e.g., https://www.docs.preflib.org/reference/instances/sampling.html -- I am not sure what you mean by urn models that do not generate rankings...
---
Reply to Comment 1.1.1:
Title: Urn Model Result
Comment: Thank you for your attention to the rebuttal. We were not aware of these urn-based models of generating profiles and appreciate the pointer! Indeed, we would certainly believe such a model should satisfy the conditions of Lemma 1 as long as $r > 0$ (where $r$ is the number of balls of that color added after it is sampled).
As a brief proof sketch, we likely need to make use of the [exchangeability property](https://en.wikipedia.org/wiki/Pólya_urn_model#Exchangeability) of these processes. Using this, we only need to show Lemma 1 holds for the last voter sampled, as the property implies the conditional distributions of other voters should be identical. For the last voter, the probability they observe a ranking $\sigma$ is proportional to $r \cdot N_\sigma + 1$ where $N_\sigma$ is the number of other voters with ranking $\sigma$. For Lemma 1, using Bayes' rule, this should almost immediately imply that it is more likely $a$ has a higher plurality score than $b$.
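To make the exchangeability step concrete: a small exact enumeration (our own illustrative sketch, not part of the paper; the urn parameter $r$ and the number of voters are arbitrary choices) confirms that, in a Pólya-style urn over the six rankings of three candidates, the probability of a draw sequence depends only on its multiset, so the (first voter, last voter) joint law is symmetric and a statement proved for the last voter transfers to any voter:

```python
from itertools import permutations, product
from collections import Counter, defaultdict
from fractions import Fraction

# Polya urn over the 6 rankings of 3 candidates: start with one ball per
# ranking; after a ranking is drawn, return it together with r extra copies.
RANKINGS = list(permutations("abc"))
r, n = 2, 3  # r extra balls per draw, n voters (both illustrative choices)

def seq_prob(seq):
    """Exact probability of drawing the sequence `seq` from the urn."""
    p, counts = Fraction(1), Counter()
    for t, sigma in enumerate(seq):
        p *= Fraction(1 + r * counts[sigma], len(RANKINGS) + r * t)
        counts[sigma] += 1
    return p

# Exchangeability: the probability of a sequence depends only on its multiset.
by_multiset = defaultdict(set)
for seq in product(RANKINGS, repeat=n):
    by_multiset[tuple(sorted(seq))].add(seq_prob(seq))
assert all(len(probs) == 1 for probs in by_multiset.values())

# Hence the (first voter, last voter) joint law is symmetric.
joint = defaultdict(Fraction)
for seq in product(RANKINGS, repeat=n):
    joint[(seq[0], seq[-1])] += seq_prob(seq)
assert all(joint[(s, t)] == joint[(t, s)] for s in RANKINGS for t in RANKINGS)
```

Working over `fractions.Fraction` keeps the check exact, with no sampling noise.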
We would certainly be open to including this in a camera-ready revision as an additional model demonstrating our results (with details in the appendix, of course). | Summary: The paper considers a typical social choice problem: to design a voting rule that has desirable properties (e.g., it is onto and non-dictatorship) and it does not enable voters to misreport their vote for achieving outcomes that she prefers more. It is known that this is impossible in general, even if one allows the voter to have a prior knowledge on the preferences of other voters. This work focuses on a particular structure of this prior knowledge, namely that other voters' preferences are correlated to the manipulating voter's preferences according to classical models for the generation of preferences in social choice as Mallows model, Placket-Luce model, and Thurstone-Mosteller model.
Quite surprisingly, the paper proves that the plurality rule is strategyproof when the agent has prior knowledge compatible with these models of preference generation, while other positional scoring rules (as well as Copeland and maximin) are not, at least for a sufficiently large number of voters.
Strengths: The problem is a well-established problem in social choice, and the paper provides a very positive result (the existence of a very natural voting rule that is strategyproof in a very realistic setting).
Moreover, the result is also surprising (the fact that plurality enjoys the property while other rules fail). The result is in a way also robust: indeed, the authors prove strategyproofness of plurality not only for the above-cited models of preference generation, but for a superclass of them (which also contains noisy variations of the above models whenever the noise is small).
Weaknesses: As recognized by the authors, the negative results only hold for three candidates (it is thus possible, though conjectured to be improbable, that for more than three candidates plurality is not the unique rule enjoying all the desired properties).
The paper only provides an analysis of specific voting rules, but not a characterization of strategyproof rules.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I believe that there is an error in the last two equations on page 6 (even if they do not affect the final result). Indeed it should be 1/(|C|+1) (u(C) - |C|u(a)) >= |C|/(|C|+1) (u(c) - u(a)), for c with minimum u(c) among all c in C (similar for the last equation). Am I right?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Reviewer Comment:**
> I believe that there is an error in the last two equations on page 6 (even if they do not affect the final result). Indeed it should be $1/(|C|+1) (u(C) - |C|u(a)) \geq |C|/(|C|+1) (u(c) - u(a))$, for $c$ with minimum $u(c)$ among all $c$ in $C$ (similar for the last equation). Am I right?
**Response:**
We don't believe so. Recall that we defined the utility of a set of candidates to be the mean of the individual utilities, so $u(S) = \frac{1}{|S|}\sum_{c \in S} u(c)$. Therefore, $u(C \cup \{a\}) = \frac{\sum_{c \in C \cup \{a\}} u(c)}{|C| + 1} = \frac{(\sum_{c \in C} u(c)) + u(a)}{|C| + 1} = \frac{|C|}{|C| + 1} \cdot u(C) + \frac{1}{|C| + 1}u(a).$ When we subtract $u(a)$ from this quantity, we obtain $\frac{|C|}{|C| + 1} \cdot u(C) - \frac{|C|}{|C| + 1}u(a).$ We will certainly add more justification to this step in a camera-ready revision.
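For reference, the identity above is easy to spot-check numerically; the following sketch (our own, with arbitrary utility values) verifies $u(C \cup \{a\}) - u(a) = \frac{|C|}{|C|+1}\,(u(C) - u(a))$ on random instances:

```python
import random

random.seed(0)

def u_set(u, S):
    """Utility of a set of candidates: the mean of individual utilities."""
    return sum(u[c] for c in S) / len(S)

# Check u(C ∪ {a}) - u(a) == |C|/(|C|+1) * (u(C) - u(a)) on random instances,
# with candidate 0 playing the role of a.
for _ in range(100):
    C = list(range(1, random.randint(2, 6)))
    u = {c: random.random() for c in C + [0]}
    lhs = u_set(u, C + [0]) - u[0]
    rhs = len(C) / (len(C) + 1) * (u_set(u, C) - u[0])
    assert abs(lhs - rhs) < 1e-12
```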
---
Rebuttal Comment 1.1:
Comment: Thanks. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper studies the voting problem where agents' ranking preferences are correlated. Roughly speaking, voters do not know exactly each other's preferences; when a voter knows his/her own preference, (s)he can "infer" others' preferences. Strategyproofness is then defined with expected utilities. This is a practical setting where the classical Gibbard-Satterthwaite impossibility theorem does not apply. To model this, a prior distribution of voters' preferences is defined, and then a voter, after "receiving" his/her own preference, has a posterior belief over the preference distribution. The authors consider a wide range of distributions including Mallows (with both "confident" and "unconfident" variants), Placket-Luce, and Thurstone-Mosteller. The authors show that, among all the positional scoring rules (a positional scoring rule assigns a score to each ranking position, and the output depends only on the scores), the plurality voting rule is the only one that is "strategyproof" (or, OBIC as defined in the paper). Specifically, the authors show that the plurality voting rule is OBIC. On the other hand, with three alternatives, any other voting rule is not OBIC when the number of voters is sufficiently large. For Borda Count with three alternatives, it fails to be OBIC with any number of voters that is at least 2. One key observation is that any of the abovementioned distributions satisfies the so-called top-choice correlated property.
Strengths: I believe the model studied in this paper is very reasonable and practical. The main results of this paper are neat and clean. The main message that the plurality voting rule stands out is also clean and promising. I think this paper provides a significant contribution to the social choice literature.
Weaknesses: I find the result that the plurality rule being OBIC is not surprising. It is expected that the signal distributions have the top-choice correlated property, and it is also not very surprising that, with this property, plurality voting is OBIC. It may be more surprising to see that other voting rules fail to be OBIC.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: No question.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | null | null | null | null | null | null | null |
Classification of Heavy-tailed Features in High Dimensions: a Superstatistical Approach | Accept (poster) | Summary: The paper is concerned with binary classification, when the data come from two point clouds that are superpositions of Gaussian distributions. This model allows for data distributions with fat tails. The authors analyse the performance of empirical risk minimization in the high-dimensional regime where the number of training samples and the dimension jointly diverge. Using the replica method from statistical physics, they reduce the computation of, e.g., the training loss / generalization error to the resolution of self-consistent equations.
In the third section of the paper, the authors apply their main result to experiments with synthetic data.
Strengths: The paper is clearly written, and the mathematical derivations are easy to follow. The application of the replica method on a mixture of superposition of Gaussian distributions is new, to the best of my knowledge.
Weaknesses: The paper lacks an experiment on real data to showcase situations in which the data model used in this paper (superposition of Gaussians) is more realistic / useful than simply using a mixture of two Gaussians.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Could the computations be easily extended to :
1) other types of estimators, e.g. Bayesian estimators that sample from a posterior distribution instead of ERM
2) more generic covariances for the Gaussian distributions. For instance, instead of $N(\mu, \Delta I_d)$, use $N(\mu, \Delta \Sigma)$ where $\Sigma$ is a generic covariance matrix.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the referee for her/his positive comments on our paper. We summarise our answers to her/his questions below.
* As foreseen by the referee, the method can indeed be applied to the study of an estimator obtained by minimisation of a proper convex function, and in particular, to the study of the Bayes optimal estimator itself (in this case, the convexity of the problem in the Bayes optimal setting is guaranteed by the so-called "Nishimori conditions" in the statistical physics jargon). As the Bayes optimal estimator is indeed an important reference for our results, stimulated by the question of the referee we have derived the expression for the Bayes optimal error in a new Appendix A.4 and added, as guide for comparison, the corresponding error curves in our plots.
* The case suggested by the referee is indeed within the reach of our theory, which can be easily generalised to consider clouds of the type $P(\boldsymbol{x}|\boldsymbol \mu)=\mathbb E[\mathcal N(\boldsymbol x;\boldsymbol\mu,\Delta\boldsymbol\Sigma)]$, possibly with a different covariance matrix for each cloud (the case of identical $\boldsymbol\Sigma$ can be reduced to the current setting by a simple change of variables). For example, by supposing that the two clouds are described by a density $P_\pm(\boldsymbol x|\boldsymbol\mu_\pm)=\mathbb E[\mathcal N(\boldsymbol x;\boldsymbol\mu_\pm,\Delta\boldsymbol\Sigma)]$, fixed-point equations similar to the ones presented in the paper would be obtained with respect to scalar order parameters, although two order parameters $\boldsymbol q_\pm$ would be needed in this case, each taking into account the correlation between the estimator $\boldsymbol w^\star$ and each one of the covariances. To avoid this complication, we restricted our analysis to the simplest case for illustrative purposes: we however added a line in the main text commenting about this more general setting pointed out by the referee. We mention here that the case in which the matrix $\boldsymbol\Sigma$ is assumed to be random is, instead, much more challenging to analyse: the dependence of the order parameters on the additional stochasticity is, in this case, much more complicated than the one appearing in Eq. 47 and does not allow the simple factorisation given therein.
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal by Authors
Comment: I thank the authors for their detailed response and appreciate the addition of results for the Bayes-optimal estimator. This leads me to increase my score. | Summary: This paper investigates the asymptotic behavior of Generalized Linear Models (GLM) when the number of training samples $n$, and the dimension of the feature-space $d$ both go to infinity, but the ratio $n/d$ is fixed to some known bounded value $\alpha$. Moreover, authors assume the training data points are drawn from a mixture of two heavy-tailed distributions, which is different from the usual Gaussian assumption in most of the existing works (the mentioned heavy-tailed distributions are constructed by combining uncountably infinitely many Gaussian distributions).
The paper claims to achieve a non-trivial asymptotic characterization of this problem setting, and also tries to validate it via a number of experiments on synthetic data. The paper has a number of shortcomings; therefore, my current vote is borderline reject. The presentation of the main results needs to be significantly improved, and I would also like to see comments from other reviewers with more expertise in this particular field to assess the level of technical contribution in this work.
Strengths: - Paper is well-written (at least in most parts), and the literature review part in the introduction section is very informative.
- I have not completely checked the proofs, however, I have not noticed any mathematical mistakes. The technical validity of the theoretical part looks fine. I have not checked the experimental parts.
Weaknesses: - All the theoretical derivations are based on asymptotics, while any result in the non-asymptotic case would be far more interesting.
- I suggest presenting the results more formally, i.e., in the form of Theorems, Lemmas, and etc. Otherwise, the actual level of technical contribution in this work becomes hard to assess. Right now, there are no theorems inside the manuscript. Also, the explanations from L.130 to L.158 are vague (please see the questions section).
- The main motivation behind this work is to assume non-Gaussian distributions (with a possibly infinite covariance) as the components of the mixture model which generates the data. I am concerned with how important and/or interesting this setting would look to the community. Due to the Gaussian universality principle, the analysis based on the Gaussian assumptions applies (more or less) to all "Gaussian-like" distributions (which covers almost all distributions with bounded moments) as well. Heavy-tailed distributions with power-law tails which do not have a bounded covariance are of course excluded from this list, but how important are they? IMO, the authors have not given enough motivation regarding this issue.
- The process which is used to generate the above-mentioned heavy-tailed distributions is very specific: superposition of uncountably many Gaussians, or equivalently assuming that the covariance matrix of the Gaussian itself is a R.V. with an inverse-Gamma distribution. The authors have not discussed the limitations of this process. How general is it? Does it include almost all heavy-tailed distributions?
- I have not completely checked the proofs in supp. However, the mathematical tools used for deriving the results are not sophisticated. Not using sophisticated math or not relying on existing elegant theorems is fine, as long as an important problem has been solved or an interesting discovery has been made. This again takes us to a previous comment, on the importance level of this problem setting. I am not familiar with this particular line of research, so I have to wait for other reviewers to comment on that.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: L.130 to L.158: Results are not properly presented. I suggest using a formal theorem and a set of lemmas.
L.139, Eq (4): What are $\boldsymbol{g}$ and $\boldsymbol{h}$?
L.140, Eq (5): What are $h_{\pm}$, $\omega_{\pm}$, $q$ and ... Actually this list can go on.
The main theoretical contributions are presented in Eq (8) and Eq (9). However, the vague explanation preceding them would have a strongly negative impact on the potential reader.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the referee for her/his time and comments. Here are some observations concerning the points raised in the report:
* We agree on the fact that the non-asymptotic behavior is also an interesting problem to consider. In this paper, we have worked in line with a large body of literature that focuses on the asymptotic behavior, however, finite-size *corrections* to the asymptotic behavior are also accessible with the adopted approach (although quite cumbersome) by taking into account corrections to the saddle-point approximation [see, for a review, Lucibello, arXiv:1502.02471]. To our knowledge, this is actually an interesting problem that has not yet been investigated even within the (a priori much simpler) pure Gaussian case. We will keep this suggestion in mind for a future investigation and we thank the referee for raising this point.
* We have re-organized the material highlighting our main results throughout the text. The set of fixed point equations (7) has been obtained using the heuristic replica method in Appendix A, which we are confident could be made fully rigorous via standard techniques such as Approximate Message Passing (AMP) [Loureiro et al., 2021] or Gordon Minimax techniques [Mignacco et al., 2020]. We will clarify this point in the text with references to the original works. Note that rigorous proof may well require finer technical assumptions. All the following results in our work are instead obtained through fully rigorous derivations from the result in Section 2. We hope that the new form improves the readability of the manuscript.
* We thank the referee for giving us the opportunity to clarify this important point. The goal of the paper is to “challenge” the validity of a Gaussian universality principle by proposing a model that is under full analytical control and includes the possibility of non-Gaussian datasets. On the one hand, our contribution covers the case of distributions with infinite covariance: such distributions are clearly out of the applicability range of a possible Gaussian universality principle, as correctly stated by the referee. We present this result in this case as a powerful byproduct of our analysis, as, typically, the available theoretical works impose some constraints on the finiteness of the moments, whereas we have no such condition. On the other hand, with respect to the Gaussian universality literature, our most surprising result concerns the case of distributions with *finite* second moment. In the examples of Section 3.1 and Section 3.5, in particular, the existence of moments is controlled via the parameter $a$ of the variance distribution, so that the data distribution has an unbounded k-th moment if $k\geq 2a$. Crucially, in such cases Gaussian universality breaks down due to the presence of fat tails, i.e., a Gaussian approximation of the dataset (obtained by matching first and second moment) *does not* reproduce the correct asymptotics. We thus *analytically* show that the performances in terms of generalisation do depend on higher moments. We have further clarified this point in our *Introduction*.
* We would like to thank the referee for raising this important point. It is true in general that any distribution can be approximated (in the L1 sense) by a possibly uncountable superposition of Gaussians, whose means and covariances are given according to some law (see, e.g., [Alspach and Sorenson, IEEE Trans. Autom. Control, 17(4), 439-448, 1972]). We leverage this powerful result by considering the case of a law on the second moment of a Gaussian distribution (as a result, the generated functions are even in $\boldsymbol x$); the resulting family is large enough to allow the analysis of *any* tail behavior, which was a central goal of the paper. We have added a comment with respect to this in the *Introduction*, with the due new references. Moreover, the idea of constructing distributions by taking their parameters as random variables themselves has appeared in various disciplines, albeit under different names: if in statistical physics it is known as superstatistics [Beck, *Recent developments in superstatistics* 2008], there is also a considerable line of work in Bayesian modeling regarding hierarchical priors and models [Gelman and Hill, *Data Analysis Using Regression and Multilevel/Hierarchical Models*, 2006; Gelman et al., *Bayesian Data Analysis*, 2013], while in probability and statistics such distributions are known as compound probability distributions [Robbins, *Asymptotically Subminimax Solutions of Compound Statistical Decision Problems*, 1951], or as doubly-stochastic models in stochastic processes in particular [Pinsky and Karlin, *An Introduction to Stochastic Modeling*, 2010; Schnoerr et al., *Cox process representation and inference for stochastic reaction-diffusion processes*, 2016].
Such superpositions of distributions are also readily used in direct applications to describe non-Gaussian data in quantitative finance [Delpini and Bormetti, *Minimal model of financial stylized facts* 2011; Langrene et al., *Switching to non-affine stochastic volatility: A closed-form expansion for the Inverse Gamma model*, 2015] or econometrics models [Nelson, *ARCH models as diffusion approximations*, 1990].
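As a minimal illustration of such a compound construction (our own sketch, not from the paper; the inverse-Gamma shape parameter and the tail threshold are arbitrary choices), a Gaussian whose variance is itself inverse-Gamma distributed can match the unit Gaussian's second moment while exhibiting much fatter tails:

```python
import random

random.seed(0)

N = 200_000
# Superstatistical sample: x = sqrt(Delta) * g with g ~ N(0, 1) and the
# variance Delta itself random, Delta ~ InverseGamma(shape=a, scale=a - 1),
# chosen so that E[Delta] = 1 and the mixture matches the unit Gaussian's
# second moment while keeping power-law tails.
a = 2.5
gammas = [random.gammavariate(a, 1.0) for _ in range(N)]
mixture = [((a - 1) / g) ** 0.5 * random.gauss(0, 1) for g in gammas]
gaussian = [random.gauss(0, 1) for _ in range(N)]

# Fat tails: large deviations are far more frequent under the mixture.
tail = lambda xs, t: sum(abs(x) > t for x in xs)
assert tail(mixture, 5.0) > 10 * tail(gaussian, 5.0)
```

(With an inverse-Gamma variance this mixture is a scaled Student-t, so the tail comparison is the expected behaviour rather than an artifact of the seed.)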
* We have carefully re-read and revised the manuscript, and restructured the presentation of our results. The quantity $\boldsymbol g$ is defined in Eq. 6, right after its appearance in Eq. 4, whilst $\boldsymbol h$ is defined in line 141 via the quantities $h_\pm$ and $\omega_\pm$ introduced by Eq. 5 (in the new version of our manuscript we have now used the notation $\coloneqq$ to clarify that we are indeed defining the quantities therein). Similarly, all order parameters are defined as the solutions of Eq. 7. We hope that the new version of the manuscript will be more clear, and we thank the referee for pointing out places where the clarity of the text could be improved.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the author(s) for their detailed response. After reading the rebuttal and also other reviewers' discussions, I decided to slightly raise my rating, but reduce my confidence score. | Summary: The paper focuses on a non-Gaussian mixture model and derives an asymptotic characterization of the statistics of the empirical risk minimization estimator. The paper considers models with two clusters and applies the analysis to convex loss functions and regularizers. The empirical evaluation investigates the theoretical results in practice.
Strengths: - The theoretical analysis of generalized linear models is an exciting direction, especially in the context of high-dimensional data.
- The empirical evaluation on synthetic use cases seems to confirm the theoretical investigations.
Weaknesses: - The paper is chaotic and very difficult to follow, which makes the work hard to understand. The listed contributions are not stated precisely: they focus on studying and analyzing, not on practical outcomes. The work is not even summarized in the conclusions section (which is strange, given that there is still some space in the manuscript), and the limitations are not discussed well. Explanations of some crucial symbols are missing, and the motivations behind some steps are not explained well.
- The proposed theoretical investigation is limited to the models with two classes. How can the results scale to multiclass scenarios?
- The empirical evaluation is limited only to artificial cases. The problem investigated by the authors is very practical, and it is crucial to provide some empirical evaluation using real datasets.
- There are many ways to go beyond the Gaussian distribution. Normalizing flows may be used to model the distributions for each of the considered clusters as an alternative to this approach. It would be beneficial to discuss this issue in the paper and even provide some empirical comparison to the approach.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: Please refer to the remarks from the Weaknesses section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 1 poor
Contribution: 3 good
Limitations: The paper is not organized properly; it is difficult to identify the contributions and a clear narrative in the work. Empirical evaluation on real cases is missing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the referee for her/his time in reading and evaluating the manuscript. As a general comment, we would like to stress that the goal of our work was to provide, for the first time, a theoretical model to analytically handle the asymptotic properties of classification estimators on non-Gaussian mixtures, in such a way that the effect of non-Gaussianity can be kept *fully* under control and compared with equivalent performances on Gaussian models.
Although the framework of the analysis is therefore, at least at this stage, purely theoretical, it provides a number of important insights: it sheds light on the validity of recent Gaussian universality claims going beyond a large literature which was, up to now, limited to the purely Gaussian case as stated in our *Introduction*. The insights include, but are not limited to, analytical expressions for the generalisation, training errors, and training loss; analytical formulas for the Bayes-optimal performance and data separability threshold (maximum number of samples possible to perfectly interpolate for a given dimension); role of the regularisation strength; validity of Gaussian universality principle on structured and random-labeled datasets. All these results were determined on a very large class of non-Gaussian distributions, with full control over the fatness of tails and even (non-)existence of data covariance.
* We would like to thank the referee for the feedback: we revisited the manuscript and improved the readability, taking into account all comments. In particular, we have added a *Conclusions and perspectives* section in which the results are summarised. We have also discussed more clearly, in that section, the limitations with respect to the treatment of real datasets. There, we highlight that the main difficulty, in this case, is the choice of the best distribution $\varrho$ given the observed dataset, a problem that however has a long tradition in the context of Bayesian estimation [Alspach and Sorenson, *Nonlinear Bayesian estimation using Gaussian sum approximations*, 1972; Gelman et al., *Bayesian Data Analysis*, 2013]. In a more simplistic approach, it can be observed that, in the case in which the square loss is adopted, the self-consistent equations depend on $\mathbb E[(1+v\Delta)^{-1}]$ and $\mathbb E[(1+v\Delta)^{-2}]$ only, and these quantities can be numerically estimated from the dataset. The exact evaluation of the quality and limitations of such an approximation on a real dataset is left for future investigation. We would like to stress, however, that taking the parameters of distributions as random variables themselves, resulting in superpositions of distributions, is a construction readily used in direct applications to describe non-Gaussian data in quantitative finance [Delpini and Bormetti, *Minimal model of financial stylized facts*, 2011; Langrene et al., *Switching to non-affine stochastic volatility: A closed-form expansion for the Inverse Gamma model*, 2015] and in econometrics models [Nelson, *ARCH models as diffusion approximations*, 1990], so convenient choices already exist for some types of datasets.
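As a toy illustration of the point about the square loss, the two functionals of the variance law entering the self-consistent equations can be estimated by a plain Monte Carlo average; here `delta_samples` and the scalar `v` are hypothetical stand-ins for quantities the rebuttal only names abstractly:

```python
import numpy as np

def square_loss_functionals(delta_samples, v):
    """Estimate E[(1 + v*Delta)^(-1)] and E[(1 + v*Delta)^(-2)] from samples
    of Delta -- per the rebuttal, the only dependence on the variance law rho
    that the square-loss self-consistent equations retain."""
    w = 1.0 / (1.0 + v * np.asarray(delta_samples, dtype=float))
    return w.mean(), (w ** 2).mean()

m1, m2 = square_loss_functionals(np.ones(100), v=1.0)  # Delta = 1 gives (0.5, 0.25)
```

In practice `delta_samples` would be empirical variance estimates extracted from a real dataset, which is exactly the approximation whose quality the rebuttal leaves for future work.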
* The theoretical model can be easily generalised to the case of $K$ classes: in the main text, we limited ourselves to the case of 2 classes for the sake of simplicity and clarity. However, to support this answer, we have modified, in Appendix A, the derivation of our results to include the case of $K$ clusters with scalar labels. Other variations (eg, one-hot-encoding labeling) can be obtained following the formalism in the cited reference [Loureiro et al., 2021].
* The restriction to synthetic datasets is due to the following reason: the provided theoretical prediction relies on the knowledge of the distribution $\varrho$ which, in the case of empirical datasets, can be estimated but is in general not known (see also above). One experiment we would like to perform in the future is indeed the comparison of the results of numerical experiments on real datasets with theoretical predictions obtained after an empirical estimation of $\varrho$, in the spirit of the analysis in [Loureiro et al., 2021] for the case of Gaussian mixtures. On the other hand, as mentioned above, our focus in this contribution was the fact that it is indeed possible to have a simple model within a non-Gaussian setting fully under control and observe therefore a breakdown of Gaussian universality results in it.
* Although normalising flows are definitely a way to produce non-Gaussian distributions and possibly model a non-Gaussian dataset, as mentioned above the goal of our paper is actually not to provide a tool to model datasets but rather an exactly solvable model, such that the presence of non-Gaussian features can be taken into account and exactly treated. Note that the superposition of the variance distribution with a Gaussian data distribution specifically, while generating a very large family of distributions including ones with any power-law tail, is necessary to employ our analytic method, where Gaussian integration can be conveniently performed by virtue of this data construction starting from Gaussian. We are not aware of references where high-dimensional asymptotics on non-Gaussian distributions is analytically obtained via normalising flows, and we would be extremely grateful to the referee if she/he could point us to some pertinent references, which we will gladly add to the manuscript.
---
Rebuttal Comment 1.1:
Title: Thank you for rebuttal
Comment: I would like to thank the authors for the clarification during the rebuttal. I read the paper one more time, as well as the other reviews and comments. After the clarification, I appreciate the theoretical contribution of this work. I still think that the paper requires some rewriting to make it more accessible to the larger community. Moreover, I think that empirical evaluation on real cases is possible, at least as a showcase for the theoretical considerations. I decided to raise my score. | Summary: The paper derives a theory for the training and generalization error when classifying a large number of points from a non-Gaussian high-dimensional data distribution. The data model is a doubly-stochastic process in which a parameter is sampled from a scalar distribution and a sample is then drawn from a Gaussian distribution with this parameter as its variance. A self-consistent mean-field theory is provided for the regime where the number of points is large and proportional to the dimensionality, and the equations can be solved numerically for the logistic and square losses. The theory is applied to datasets with finite and infinite covariance, to study the role of regularization, and to estimate the separability threshold for such data, highlighting both cases of “Gaussian universality”, where the results coincide with the previous “Gaussian” literature, and deviations from such universality.
Strengths: * Originality: the tasks and methods are not new, and previous contributions are very well introduced. The originality of the work lies in the successful calculation of the theory for the non-Gaussian case, which is a valuable contribution. Furthermore, the work provides a basis for analyzing when an extrapolation beyond the Gaussian case is justified.
* Significance: the paper is important in highlighting where non-Gaussian data may diverge from the Gaussian case discussed in the literature, and as such it opens an avenue for future work to use non-Gaussian analysis of real-world data, which is an important direction. The conclusions about test error in the Gaussian vs. non-Gaussian cases are non-trivial (the inversion between figures 1+2 and 3), as is the finding of an optimal finite regularization value for the non-Gaussian case.
* Clarity: the paper is in general well-written and can serve as an exemplar for presenting complicated theoretical results without sacrificing the clarity of the ideas.
* Quality: the paper seems technically very sound, with impressive combination of theory and simulations.
Weaknesses: * Clarity: some of the notations and ideas are only hastily introduced, two prominent examples being “Superstatistical Features” (from the title) and “uncountable superposition” (from the abstract), which seem overly complicated. To me, a presentation through a “doubly-stochastic” process (as in my summary) is straightforward and requires no extra jargon.
Another avenue for improving the reader's understanding may lie in “Quadratic loss with ridge regularisation”, where the results are more amenable to interpretation. The authors should provide more intuition for those results and, furthermore, point out where the non-Gaussianity enters the self-consistent equations (i.e., what part is shared with the Gaussian case).
* Originality: the main part of the work focuses on reproducing known results from the Gaussian literature and exploring deviations from them in the non-Gaussian case. In that sense, there is no originality in this work beyond the (impressive) achievement of providing a theory that describes this non-Gaussian case.
* Significance: the work would have been more influential if it provided new tools which can be applied to datasets, where a small number of shape parameters is fit to non-Gaussian data and the ability to classify this data can be predicted from theory and then compared to actual classification of the data.
Cases where the Gaussian case predicts the behavior for the non-Gaussian case might deserve a fuller theoretical analysis, perhaps through the analysis suggested for “Quadratic loss with ridge regularisation” above.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: * Why bother with the classifier estimator phi? Is there any reasonable choice beyond sign?
* Why do you refer to z* as “the matrix”?
* Are all the 8 (or 10) order parameters scalars?
* Can you clarify the interpretation of the proximals h and g? Are their distributions a mean-field version of some real-world quantity?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately addressed the limitations of their work. Those include the diagonal structure of the covariance matrix (conditional on the value of delta), the use of K=2, which leads to a lack of discussion of the means of the distributions (the means do not affect anything for K=2 beyond their norm), and the fact that the resulting theory is solvable only numerically.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for her/his remarks and positive evaluation of our paper, and for capturing the spirit of our contribution very well. We are grateful to the referee for her/his suggestions about improving the clarity of the manuscript, which we implemented in the new version. We also thank her/him for suggestions regarding possible applications of the theory, in particular, in its simplest form given in the case of ridge regression: we have added a comment, in this sense, in a newly introduced *Conclusions and perspectives* section.
We list our answers to her/his questions below, hoping that they will be satisfactory.
* The referee is right that, in the given setting, the sign function is indeed the only reasonable choice, and the one we adopted throughout the text. We decided to leave the classifier more general because, in the new version, we have extended the replica calculation to the multiclass case in the Appendix, where the choice of classifier is less obvious.
* Thanks for pointing out the typo regarding $\boldsymbol z^\star$, which is, indeed, a vector!
* All order parameters are scalars: in the paper, we tried to consistently use bold fonts for vectorial/matricial objects (e.g., $\boldsymbol g$) and normal fonts for scalars.
* The proximals $h_\pm$ can be seen as expressing the statistics of the preactivation $\frac{\boldsymbol w^\star{}^\intercal \boldsymbol x^\nu}{\sqrt d}+b^\star$ when $\boldsymbol x^\nu$ is a training set datapoint with label $y^\nu=\pm1$. The proximal $\boldsymbol g$, instead, captures the statistics of $\boldsymbol w^\star$ itself (as expressed by Eq. 8). | Rebuttal 1:
Rebuttal: *General remarks* We thank the reviewers for their helpful feedback which helped us improve the readability and clarity of our work, and better express the importance of our contribution. To take into account their comments, we have prepared a new version of our manuscript, in which, beyond addressing the referees’ comments, we provide additional, important results, namely
* a generalisation of our result to $K$ classes (to appear in the Appendix);
* a derivation of an analytical formula for the data separability threshold;
* a derivation of an analytical formula for the Bayes-optimal error, and the comparison of the bound with the ERM results (we provide, as an example, the new Fig. 1, where the Bayes-optimal error bound is given as a dashed line for each value of $a$).
We have answered the questions and addressed specific concerns of each reviewer in the individual answers below.
Pdf: /pdf/f6e22c3f92069883e97c0105e4ee9a2be0942fb7.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Combating Bilateral Edge Noise for Robust Link Prediction | Accept (poster) | Summary: This paper focuses on the robustness of graph neural networks (GNNs) in the presence of edge noise during link prediction on graphs. The authors empirically investigate the impact of edge noise on both the input topology and target labels, revealing significant performance degradation and representation collapse. To address this challenge, they propose a novel principle called Robust Graph Information Bottleneck (RGIB), which leverages information theory to extract reliable supervision signals and prevent representation collapse. The paper explores two instantiations of RGIB, RGIB-SSL and RGIB-REP, which utilize self-supervised learning and data reparametrization techniques, respectively, for implicit and explicit data denoising.
Strengths: Overall, this paper makes a valuable contribution to understanding and enhancing the robustness of GNNs in link prediction tasks affected by edge noise.
The main novelty lies in the systematic study of bilateral edge noise, including its empirical influence, visualization, and the two instantiations of the Robust Graph Information Bottleneck with data reparameterization and data augmentation.
This paper also provides theoretical analysis for noisy dataset convergence relaxations.
Weaknesses: This paper provides a novel method and analysis for the edge-noise issue in GNNs through the lens of information theory. The method is clearly explained, and the performance is supported by grounded evaluations and visualizations. I did not carefully check the theory proofs, but from the main paper, there is no obvious limitation.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: If the SSL and REP versions are parallel methods, could the authors discuss guidelines on method selection? Can they be used simultaneously while still improving performance? It would be better to add a section with such a discussion, supported by case evidence.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer 4XLr for the valuable feedback. We addressed all the comments. Please find the point-to-point responses below. Any further comments and discussions are welcomed!
**Q1**. *If the SSL and REP versions are parallel methods, could the authors discuss guidelines on method selection? Can they be used simultaneously while still improving performance? It would be better to add a section with such a discussion, supported by case evidence.*
**Reply:** We appreciate Reviewer 4XLr’s insightful comment. We would like to clarify the comparison of the two instantiations of RGIB in the following five folds.
**(1)** **From the theoretical perspective**, RGIB-SSL **explicitly** optimizes the representation $H$ with self-supervised regularizations, i.e., alignment $I(H_1; H_2)$ and uniformity $H(H_1)$, $H(H_2)$. By contrast, RGIB-REP **implicitly** optimizes $H$ by purifying the noisy $\tilde{A}$ and $\tilde{Y}$ with the reparameterization mechanism to extract clean signals in the forms of latent variables $Z_{Y}$ and $Z_{A}$. The information constraints $I(Z_{A}; \tilde{A})$, $I(Z_{Y}; \tilde{Y})$ are directly acting on $Z_{Y}$ and $Z_{A}$ and indirectly regularizing the representation $H$.
**(2)** **Besides, from the methodology perspective, both instantiations are equipped with adaptive designs for obtaining an effective information bottleneck.** RGIB-SSL utilizes the automatically augmented views $\tilde{A_1}$, $\tilde{A_2}$ in a contrastive manner to be resistant to input noise. RGIB-SSL is intrinsically robust to label noise due to its self-supervised nature. Besides, RGIB-REP explicitly purifies the input graph’s topology and target labels with the jointly reparameterized $Z_{Y}$ and $Z_{A}$. It enables to model the mutual patterns of edge noise from both input and label spaces.
In addition, we conduct a further comparison and analysis of the two instantiations that are summarized as follows.
| Instantiation | methodology | advantages | disadvantages |
| ------------- | ------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
| RGIB-SSL | self-supervised learning | automated graph augmentation; good effectiveness; can be applied in entirely self-supervised settings without labels. | with expensive calculation for the contrastive objectives, especially the uniformity; requires extra graph augmentation operations. |
| RGIB-REP | data reparametrization | no needs to do data augmentation; good efficiency; the input/output constraints do not require extra annotations for supervision and can be easily controlled. | sensitive to the hyper-parameters $\lambda$; less effective in extremely noisy cases; only applicable in fully supervised settings. |
**(3)** **In addition, RGIB-SSL assumes that the learned representation can be improved with higher uniformity and alignment**. As RGIB-SSL directly acts on the graph representation, it is more suitable for recovering the distribution of representation, especially when encountering representation collapse due to the severe edge noise. By contrast, RGIB-REP explicitly purifies the input graph’s topology and target labels with the jointly reparameterized $Z_A$ and $Z_Y$. Here, the latent variables $Z_A$,$Z_Y$ are expected to be more clean and more informative than the noisy $\tilde{A}$,$\tilde{Y}$. **RGIB-REP assumes that the GNN model can identify these latent variables and further benefit its learning procedure against noise**.
**(4)** **Empirically, we observe that the two instantiations of RGIB can be generalized to different scenarios with their own priority according to the intrinsic graph properties.** RGIB-SSL is more adaptive to sparser graphs, e.g., Cora and Citeseer, where the edge noise results in severer representation collapse. RGIB-REP can be more suitable for denser graphs, e.g., Facebook and Chameleon, where the latent variables of edge data are extracted by RGIB-REP. More importantly, the two RGIB instantiations can be complementary to each other with flexible options in practical applications, and we have summarized such a point in Remark 5.1 of Section 5.1.
**(5)** **Finally, we attempt to jointly combine and use RGIB-REP and RGIB-SSL simultaneously, as you suggest.** Specifically, the "RGIB-REP+RGIB-SSL" integrates both data reparametrization technique and self-supervised learning method, and the final objective for optimization equals minimizing Equation 3 and Equation 4 simultaneously. Then, we compare "RGIB-REP+RGIB-SSL" with standard training, RGIB-REP, and RGIB-SSL based on a 4-layer GCN on all six datasets with $\epsilon=40\\%$ bilateral noise.
**As shown in the table below, "RGIB-REP+RGIB-SSL" achieves performance comparable to RGIB-REP and RGIB-SSL in most cases and outperforms both instantiations on the Facebook dataset.** Although we believe that more careful finetuning of the hyper-parameters would bring further improvements to the "RGIB-REP+RGIB-SSL" combination, we suggest using one of the instantiations in practice to keep the learning objective simple yet sufficient to combat the bilateral edge noise. The above clarifications and evaluation results will be added to our draft.
| dataset | Cora | Citeseer | Pubmed | Facebook | Chameleon | Squirrel |
| ----------------- | ----- | -------- | ------ | -------- | --------- | -------- |
| Standard training | .7419 | .7380 | .8748 | .9520 | .9496 | .9406 |
| RGIB-REP | .7966 | .7519 | .8834 | .9770 | .9621 | .9455 |
| RGIB-SSL | .8554 | .8427 | .8918 | .9711 | .9592 | .9426 |
| RGIB-REP+RGIB-SSL | .8351 | .8270 | .8880 | .9819 | .9570 | .9431 | | Summary: This paper proposes to tackle the bilateral edge noise via mutual information. The authors start from empirical observations that existing GNNs are vulnerable to bilateral edge noises. To tackle this issue, the authors propose a robust graph information bottleneck which is information-theory guided. In practice, a self-supervised regularization and a purification mechanism are proposed. The extensive evaluation on various datasets as well as models demonstrates the effectiveness of proposed algorithm.
Strengths: 1. The paper is well-written and well-organized.
2. The introduced bilateral edge noise is an interesting and challenging problem in GNNs.
3. The proposed robust graph information bottleneck is well-motivated by both sufficient empirical observations as well as theoretical analysis.
4. The experiments are extensive, including various datasets and GNNs. Various ablation studies demonstrate the effectiveness of RGIB-SSL and RGIB-REP.
Weaknesses: I have several concerns:
1. In RGIB-SSL, the authors introduce hybrid graph augmentation which shows superiority over other contrastive learning methods. However, how the proposed augmentation outperforms other graph contrastive methods needs more clarification to provide some insights of this design.
2. It is recommended to demonstrate the robustness of the proposed algorithm under attacks, such as [4, 8, 46].
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. Please clarify the motivation of the hybrid graph augmentation and highlight the contribution compared to those in [18, 47].
2. Please include the evaluation under attacks [4, 8, 46] if possible, or discuss the generalization of the proposed algorithm against adversarial attacks.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have not discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer Z7rx for the valuable feedback. We addressed all the comments. Please find the point-to-point responses below. Any further comments and discussions are welcomed!
**Q1**. *In RGIB-SSL, the authors introduce hybrid graph augmentation which shows superiority over other contrastive learning methods. However, how the proposed augmentation outperforms other graph contrastive methods needs more clarification to provide some insights of this design.*
**Reply:** We appreciate the reviewer’s insightful advice. Regarding the three information terms of RGIB-SSL in Equation 3, we elaborate a detailed clarification by comparing it with other contrastive learning methods in the following three folds.
**(1)** supervision term $I(H_1;\tilde{Y}) + I(H_2;\tilde{Y})$.
**Compared with the common manner of self-supervised contrastive learning, RGIB-SSL also considers utilizing noisy labels $\tilde{Y}$.** We show that, although being potentially noisy, the labels $\tilde{Y}$ can also benefit learning. Besides, our experiments show that both the supervised contrastive learning method (e.g., SupCon) and the self-supervised contrastive learning method (e.g., GRACE) perform poorly when learning with the bilateral edge noise. An intuitive explanation is that SupCon entirely trusts and utilizes the noisy $\tilde{Y}$ while GRACE does not adopt $\tilde{Y}$.
**From the data perspective, RGIB-SSL performs a hybrid mode of supervised and self-supervised learning.** It effectively extracts and utilizes the informative signals in $\tilde{Y}$ without entirely absorbing its noisy signals. This is achieved with the synergy of two self-supervision terms as follows.
**(2)** self-supervision alignment term $I(H_1;H_2)$.
**Here, the main differences with common contrastive learning methods are data augmentation and loss function.** Specifically, RGIB-SSL utilizes the hybrid augmentation algorithm and self-adversarial loss. Compared with the fixed augmentation manner of other contrastive learning methods [18, 47], i.e., performs random perturbation on node feature and edges, we propose a hybrid augmentation method with four augmentation operations (please refer to Appendix C for details).
The motivation here is to encourage more diverse views with lower MI $I(A_1; A_2)$ and to avoid manual selection of augmentation operations. Besides, Proposition 4.2 provides a theoretical analysis of hybrid augmentation from information-theoretical perspectives.
Empirically, compared with fixed augmentation, the hybrid brings a 3.0% average AUC promotion on Cora (please refer to Table 5). This means that a stronger augmentation scheme with more diverse views can help better deal with severer edge noise. Besides, the self-adversarial loss further enhances high-quality pairs and decreases low-quality counterparts. It refines the signal and brings up to 2.1% promotion.
**(3)** self-supervision uniformity term $H(H_1) + H(H_2)$.
As for the uniformity term, the understanding part in Section 3.2 shows that a severer edge noise brings a worse uniformity. **To learn a more robust representation, we add this uniformity term to form the final loss of RGIB-SSL and adopt the Gaussian potential kernel for implementation, which is usually not considered in other contrastive learning methods.**
The ablation studies in Section 5.2 also illustrate that the uniformity term is essential, especially in dealing with label noise. Besides, the uniformity of learned representation is also enhanced (see Figure 6), and the various query edges tend to be more uniformly distributed on the unit circle, especially for the negative edges.
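To illustrate what the alignment and uniformity terms compute, here is a minimal numpy sketch in the style of the Gaussian-potential uniformity loss of Wang and Isola (2020), which the rebuttal says is adopted for the uniformity term; the kernel temperature `t` and the loss weighting are illustrative assumptions, not the paper's actual hyper-parameters:

```python
import numpy as np

def _normalize(h):
    # project representations onto the unit sphere
    return h / np.linalg.norm(h, axis=1, keepdims=True)

def alignment_loss(h1, h2):
    # alignment: matched augmented views of the same edge should map close together
    h1, h2 = _normalize(h1), _normalize(h2)
    return float(np.mean(np.sum((h1 - h2) ** 2, axis=1)))

def uniformity_loss(h, t=2.0):
    # uniformity via a Gaussian potential kernel on pairwise squared distances;
    # more negative values mean embeddings spread more uniformly on the sphere
    h = _normalize(h)
    sq = np.sum((h[:, None, :] - h[None, :, :]) ** 2, axis=-1)
    iu = np.triu_indices(len(h), k=1)  # each pair counted once
    return float(np.log(np.mean(np.exp(-t * sq[iu]))))

rng = np.random.default_rng(0)
h1, h2 = rng.normal(size=(64, 16)), rng.normal(size=(64, 16))  # two augmented views
loss = alignment_loss(h1, h2) + 0.5 * (uniformity_loss(h1) + uniformity_loss(h2))
```

Minimizing the uniformity term counteracts exactly the representation collapse described above, since collapsed embeddings make the pairwise distances small and the Gaussian potential large.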
In the revision, we will refine and highlight these explanations for the hybrid graph augmentation.
**Q2**. *It is recommended to demonstrate the robustness of the proposed algorithm under attacks, such as [4, 8, 46].*
**Reply:** Thanks for the nice suggestion. As an adversarial attack, the Nettack [49] is considered in all the three works [4, 8, 46] you mentioned. In Appendix D.4 of our work, we conduct experiments based on Nettack that perturbs the graph structure, which is the same as you suggested.
As the Test AUC shown in the tables below, the adversarial attack that adds noisy edges to the input graph also significantly degenerates the GNN's performance. **Crucially, it is observed that RGIB-SSL and RGIB-REP can also promote the robustness of GNN against adversarial attacks on graph structure.** RGIB-SSL and RGIB-REP respectively achieve $4.1\\%$ and $5.5\\%$ improvements of Test AUC on the Cora dataset with $\epsilon_{adv}=20\\%$ adversarial perturbations, demonstrating the robustness of RGIB against adversarial attacks.
Besides, the broader impact of this work and the general robustness of GNNs are also discussed in Appendices B.4 and B.5.
| Cora dataset (Table 11) | clean | $\epsilon_{adv}=20\\%$ | $\epsilon_{adv}=40\\%$ | $\epsilon_{adv}=60\\%$ |
| ----------------------- | --------- | --------------------- | --------------------- | --------------------- |
| standard training | .8686 | .7971 | .7671 | .7014 |
| RGIB-SSL | **.9260** | .8296 | **.8095** | **.8052** |
| RGIB-REP | .8758 | **.8408** | .7918 | .7611 |
| Citeseer dataset (Table 12) | clean | $\epsilon_{adv}=20\\%$ | $\epsilon_{adv}=40\\%$ | $\epsilon_{adv}=60\\%$ |
| --------------------------- | --------- | --------------------- | --------------------- | --------------------- |
| standard training | .8317 | .8139 | .7736 | .7481 |
| RGIB-SSL | **.9148** | **.8656** | **.8347** | **.8022** |
| RGIB-REP | .8415 | .8382 | .8107 | .7893 | | Summary: This paper focuses on the robustness of GNNs under the edge noise. The authors disclose the influence of bilateral edge noise and the corresponding robustness issue via a series of empirical studies on edge noise. Based on the observations of bilateral noise, the authors propose an information-theory-guided principle, Robust Graph Information Bottleneck (RGIB) and its two instantiations, RGIB-SSL and RGIB-REP. Experimental results verify the effectiveness of RGIB instantiations.
Strengths: - The paper is well written and structured.
- The empirical studies and related experiments are clear and convincing.
- Experiments are comprehensive and complete.
- The proposed RGIB principle is simple yet effective.
Weaknesses: - There is a lack of more comprehensive studies on other graph representation learning tasks, such as node classification.
- RGIB is proposed under the assumption of edge noise, however the motivation statement is not very convincing. The relationship between GIB and bilateral noise should be further explained.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - I wonder how RGIB improves other graph representation learning tasks, like node classification?
- Why is the model performance on original graphs not reported? Since graphs without additional noise are not considered completely clean, RGIB should be able to provide a performance gain there as well.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please refer to the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer cjox for the valuable feedback. We have addressed all the comments. Please find the point-by-point responses below. Any further comments and discussions are welcome!
**Q1**. *There is a lack of more comprehensive studies on other graph representation learning tasks, such as node classification. I wonder how RGIB improves other graph representation learning tasks, like node classification?*
**Reply:** Yes. In fact, we have conducted experiments on the node classification task with random label noise on nodes, as you suggested. Please refer to Appendix D.6 for details. **As shown in Tables 15 and 16 below, we justify that the RGIB framework can generalize to node classification tasks with label noise on nodes, where the two instantiations of RGIB also significantly outperform the standard training manner.**
Besides, please refer to the broader impact and the general robustness of GNNs discussed in Appendices B.4 and B.5, respectively. These contents are also relevant to your question.
| Cora (Table 15) | clean | $\epsilon=20\\%$ | $\epsilon=40\\%$ | $\epsilon=60\\%$ |
| ----------------- | -------- | --------------- | --------------- | --------------- |
| standard training | **.898** | .868 | .720 | .322 |
| RGIB-SSL | .900 | **.876** | **.786** | **.388** |
| RGIB-REP | .894 | .862 | .760 | .312 |
| Citeseer (Table 16) | clean | $\epsilon=20\\%$ | $\epsilon=40\\%$ | $\epsilon=60\\%$ |
| ------------------- | -------- | --------------- | --------------- | --------------- |
| standard training | .776 | .746 | .608 | .278 |
| RGIB-SSL | **.784** | **.770** | .646 | .324 |
| RGIB-REP | .776 | .754 | **.654** | **.364** |
**Q2**. *RGIB is proposed under the assumption of edge noise, however the motivation statement is not very convincing. The relationship between GIB and bilateral noise should be further explained.*
**Reply:** We would like to further explain the motivation of RGIB and the relationship between GIB and bilateral noise in the following three folds.
**(1)** **Conceptually, we would clarify that GIB (Equation 1) is intrinsically susceptible to label noise since it entirely preserves the label supervision by maximizing $I(H;\tilde{Y})$.** As illustrated in Figure 2, GIB decreases $I(H;\tilde{A}|\tilde{Y})$ by directly constraining $I(H;\tilde{A})$ to handle the input noise. Symmetrically, the label noise can be hidden in the area of $I(H; \tilde{Y} | \tilde{A})$, but trivially constraining $I(H; \tilde{Y})$ to regularize $I(H;\tilde{Y} | \tilde{A})$ is not ideal, since it conflicts with Equation 1. Besides, it cannot tackle the noise within $I(\tilde{A}; \tilde{Y})$, where the two kinds of noise can share similar patterns, as the random split manner does not change their distributions in expectation. Thus, GIB cannot provide an ideal solution to the bilateral edge noise investigated in this work.
**(2)** **Thus, it is crucial to further decouple the mutual dependence among $\tilde{A}$, $\tilde{Y}$, and $H$.** Based on the detailed analysis elaborated in Section 4.1 and Appendix B.2, we derive the RGIB principle that balances the three important information terms $H({H})$, $I( H; \tilde{Y} | \tilde{A})$ and $I( H; \tilde{A} | \tilde{Y})$. It works as an information bottleneck to filter out the noisy signals in both $\tilde{A}$ and $\tilde{Y}$, while utilizing the supervision signal $I( H; \tilde{Y})$ at the same time. Analytically, GIB only indirectly regularizes the MI term $I( H;\tilde{A}|\tilde{Y})$, as we explain in Section 4.1, which can remove only part of the noisy information. By contrast, RGIB considers all the related MI terms, i.e., $H({H})$, $I( H; \tilde{Y} | \tilde{A})$, and $I( H; \tilde{A} | \tilde{Y})$, and balances them as a stricter information bottleneck.
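For concreteness, one schematic way to write such a balance as a constrained objective is sketched below; the constraint directions and the thresholds $\gamma$ are our illustrative notation, not necessarily the paper's exact formulation in Section 4.1:

```latex
\max_{H}\; I(H;\tilde{Y})
\quad \text{s.t.} \quad
H(H) \ge \gamma_{s},\;\;
I(H;\tilde{Y}\mid\tilde{A}) \le \gamma_{Y},\;\;
I(H;\tilde{A}\mid\tilde{Y}) \le \gamma_{A}
```

Read this way, the entropy floor $\gamma_{s}$ guards against representation collapse, while the two conditional-MI caps bound the label-specific and topology-specific noise that $H$ may absorb.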
**(3)** **We provide two instantiations for implementing the RGIB principle, i.e., RGIB-SSL and RGIB-REP.** These two instantiations benefit from different methodologies, i.e., self-supervised learning and data reparametrization, for implicit and explicit data denoising, respectively. Note that these two methodologies are not considered in the original GIB. Besides, the GIB is highly coupled with the GAT. By contrast, RGIB does not require any modifications to GNN architecture. It can be seamlessly integrated with various GNNs and promote their robustness against bilateral noise.
In a nutshell, RGIB generalizes the GIB with improvements in both theories and methodologies to learn a robust representation more resistant to the bilateral edge noise. We will follow the reviewer's advice by refining the corresponding description to make it clearer.
**Q3**. *Why the model performance on original graphs are not reported? Since the graphs without additional noises are not considered as completely clean, RGIB should be able to provide performance gain as well.*
**Reply:** We have conducted a supplementary experiment that evaluates all the baseline methods on clean datasets in Appendix E.1. As shown in the table below (selected from Table 17), **the proposed two instantiations of RGIB can also boost prediction performance when learning on clean graphs, and significantly outperform other baselines in most cases.**
| Table 17 | Cora | Citeseer | Pubmed | Facebook | Chameleon | Squirrel |
| ----------------- | --------- | --------- | --------- | --------- | --------- | --------- |
| standard training | .8686 | .8317 | .9178 | .9870 | .9788 | **.9725** |
| RGIB-SSL | .8758 | .8415 | .9408 | **.9875** | **.9792** | .9680 |
| RGIB-REP | **.9260** | **.9148** | **.9593** | .9845 | .9740 | .9646 |
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed clarifications and experimental results. I have no further comment.
---
Reply to Comment 1.1.1:
Title: Many thanks for your positive support and constructive comments!
Comment: Hi Reviewer cjox,
Thank you so much for your comments and appreciation! We really value your constructive feedback, as it helps us improve our work. We will carefully incorporate the discussions and experiments into our submission.
Please feel free to interact with us if you have any further questions. | Summary: The authors extend the Graph Information Bottleneck (GIB) to "bilateral" structural noise and label noise. That is, both the adjacency matrix and the labels are being randomly perturbed. The authors observe that the bilateral noise leads to "poorer alignment and a worse uniformity". To handle this noise, the authors decompose the term used in GIB and, due to its intractability, propose two efficient, practical instantiations.
Strengths: 1. The authors propose two methods that significantly and consistently outperform the baselines
1. The approach is principled due to its roots in the powerful concept of information bottleneck
1. The experiments are extensive and are convincing that the method has some merit for certain applications
The paper is generally well-written and logically structured.
Weaknesses: 1. The method seems to rely heavily on the assumption that the node features are clean. It would be good to study how sensitive the model is to feature noise.
1. The authors could elaborate more on the assumptions etc., that make their instantiations tractable and when these assumptions are met.
Minor: margins between lines 215 & 216 seem violated.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Would it make sense / be possible also to evaluate the method using an attack that optimizes for $\tilde{A}$ or $\tilde{Y}$, e.g., using first-order optimization?
1. As you do not provide code, I wonder if the clean labels/adjacency are used in any way during training?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors should elaborate more and more prominently on what the computational requirements of their method are and how it compares to the other baselines.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer 4Q8x for the valuable feedback. We have addressed all the comments. Please find the point-by-point responses below. Any further comments and discussions are welcome!
**Q1**. *The method seems to rely heavily on the assumption that the node features are clean. It would be good to study how sensitive the model is to feature noise.*
**Reply:** We appreciate the reviewer's question about the node feature. **Although our RGIB methods focus on edge noise, we conduct the following experiments to verify the point.** Specifically, we compare standard training, RGIB-REP, and RGIB-SSL using a 4-layer GCN on six datasets with various ratios of feature noise. Here, the noisy feature is generated by adding random noise within the range of $[0,1]$ to the normalized node feature.
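A minimal sketch of the noise-injection step described above; the exact normalization and the assumption that `ratio` selects a fraction of nodes to perturb are ours, not necessarily the authors' implementation:

```python
import numpy as np

def add_feature_noise(X, ratio, rng=None):
    """Add uniform [0, 1) noise to the (row-normalized) features of a
    `ratio` fraction of nodes. Which rows are perturbed is a hypothetical
    design choice for illustration."""
    if rng is None:
        rng = np.random.default_rng(0)
    # row-normalize features (L1 here; the paper's normalization may differ)
    X = X / np.clip(np.abs(X).sum(axis=1, keepdims=True), 1e-12, None)
    X_noisy = X.copy()
    idx = rng.choice(len(X), size=int(ratio * len(X)), replace=False)
    X_noisy[idx] += rng.uniform(0.0, 1.0, size=(len(idx), X.shape[1]))
    return X_noisy
```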
The evaluation results of mean AUC are reported in the additional one-page PDF file of our general response.
As can be seen, noisy node features also significantly degrade the GNN's performance, and the degradation becomes more severe as the noise ratio increases. **Interestingly, compared with the standard training manner, RGIB-SSL and RGIB-REP can also promote the robustness of GNNs against feature noise.** For example, when learning with $\epsilon_f=10\\%$ feature noise, RGIB-SSL brings $16.6\\%$ and $6.5\\%$ improvements in AUC on the Cora and Citeseer datasets, respectively.
We speculate that as the graph representation $H$ is encoded from the node feature $X$, regularizing $H$ in RGIB objectives can also balance its dependence on $X$ and thus shows some potential robustness, even though we originally designed these objectives to handle edge noise.
**The above experiments show that our RGIB principle also has some merits in combating feature noise.** We should note that this might be an initial verification, and a more comprehensive study will be conducted to have a rigorous conclusion in future explorations. We will add the above discussions and evaluation results to the submission.
**Q2**. *The authors could elaborate more on the assumptions etc., that make their instantiations tractable and when these assumptions are met.*
**Reply:** Thanks for the constructive advice. The primary assumption of our work is that the edges of collected graph data can be potentially noisy. As in the submission, we present two ways of realizing the RGIB principle, which actually corresponds to some assumptions in building the objectives with the proper approximation.
**RGIB-SSL assumes that the learned representation can be improved with higher uniformity and alignment.** As RGIB-SSL directly acts on the graph representation, it is more suitable for recovering the distribution of representation, especially when encountering representation collapse due to the severe edge noise.
RGIB-REP explicitly purifies the input graph's topology and target labels with the jointly reparameterized $Z_A$ and $Z_Y$. Here, the latent variables $Z_A$ and $Z_Y$ are expected to be cleaner and more informative than the noisy $\tilde{A}$ and $\tilde{Y}$. **RGIB-REP assumes that the GNN model can identify these latent variables and thereby benefit its learning procedure against noise.**
Empirically, RGIB-SSL is more adaptive to sparser graphs, e.g., Cora and Citeseer, where the edge noise results in more severe representation collapse. RGIB-REP can be more suitable for denser graphs, e.g., Facebook and Chameleon, where RGIB-REP extracts latent variables from the edge data. More importantly, the two RGIB instantiations can be complementary to each other, offering flexible options in practical applications.
**Q3**. *Margins between lines 215 & 216 seem violated.*
**Reply:** Thanks for this comment. The small margin is due to the automatic typesetting of the LaTeX compiler, and we did not manually modify this margin in our draft. We will rearrange this part to be clearer.
**Q4**. *Would it make sense / be possible also to evaluate the method using an attack that optimizes for $\tilde{A}$ or $\tilde{Y}$, e.g., using first-order optimization?*
**Reply:** Yes, it is reasonable. In Appendix D.4, we conduct adversarial attacks on $\tilde{A}$, as you suggested. As shown in Tables 11 and 12, an adversarial attack that adds noisy edges to the input graph also significantly degrades the GNN's performance. **Importantly, it is observed that the two instantiations of RGIB can also promote the robustness of GNNs against adversarial attacks on the graph structure.** RGIB-SSL and RGIB-REP achieve $4.1\\%$ and $5.5\\%$ improvements in Test AUC on the Cora dataset with $\epsilon_{adv}=20\\%$ adversarial perturbations.
**Q5**. *As you do not provide code, I wonder if the clean labels/adjacency are used in any way during training?*
**Reply:** We would like to kindly point out that we have provided an anonymous link to our source code. Please refer to line 732 in Appendix C.2.
In addition, no clean labels or adjacency are used in any way during training. On the contrary, the model learns directly from the noisy adjacency $\tilde{A}$ and labels $\tilde{Y}$, which is practical since collected data are potentially noisy in real-world applications.
**Q6**. *The authors should elaborate more and more prominently on what the computational requirements of their method are and how it compares to the other baselines.*
**Reply:** Thanks for the advice. We provide a detailed explanation in the following two folds.
**(1) Noise information.** RGIB and all the baselines are run without any noise priors, e.g., noise type or noise ratio. The only required information for training is adjacency $\tilde{A}$, node feature $X$, and edge labels $\tilde{Y}$, not including any additional heuristics or assumptions.
**(2) Training time.** Further, we evaluate the effectiveness and efficiency of the proposed methods on two large-scale datasets with bilateral noise. As shown in Table 14 of Appendix D.5, the extra computing costs of RGIB are within an acceptable range.
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal
Comment: All my comments have been addressed.
I think it would help the presentation if the adversarial attack experiments from Appendix D.4 were moved to the main part in a revision of the submitted paper.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your confirmation and the further suggestion. We will move the adversarial attack experiments in Appendix D.4 to the main part in the revision, and all other advice will be considered and followed to improve the corresponding parts.
Best,
The author of submission4728 | Rebuttal 1:
Rebuttal: ### A General Response by Authors
**We would like to thank all the reviewers for their valuable comments on our work.**
**We have received five reviews with positive ratings of 7, 6, 7, 7, 7. We appreciate that all the reviewers have a good impression of our work**, including **(1)** interesting problem and powerful solution (Fboz, 4Q8x, cjox, Z7rx, 4XLr); **(2)** comprehensive and convincing experiments (Fboz, 4Q8x, cjox, Z7rx, 4XLr); **(3)** sufficient theoretical support (Fboz, Z7rx, 4XLr); and **(4)** good writing and presentation (Fboz, 4Q8x, cjox, Z7rx).
**In the rebuttal period, we have provided detailed responses to all the comments and questions point-by-point.** Specifically, we further clarify the assumption (Q2 for 4Q8x), motivation (Q2 for cjox), method (Q1 for Z7rx; Q1 for 4XLr), extension scenarios (Q4 for 4Q8x; Q1,Q3 for cjox; Q2 for Z7rx) and training details (Q5,Q6 for 4Q8x) of our work. Besides, we add new empirical evaluations with the extension on feature noise (Q1 for 4Q8x) and the integration of the two instantiations of RGIB (Q1 for 4XLr). The attached one-page PDF file contains the evaluations on feature noise (Q1 for 4Q8x).
Lastly, we would appreciate all reviewers’ time again. Would you mind checking our response and confirming whether you have any further questions? **We are anticipating your post-rebuttal feedback!**
Pdf: /pdf/ddd48373f09ea05cbe498200579b4079018de6d4.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper tackles the challenge of link prediction on graphs in the presence of edge noise, a topic that has seen little exploration despite the advancements in graph neural networks (GNNs). Through an empirical study, the authors reveal that edge noise can adversely affect both input topology and target labels, leading to performance degradation and representation collapse. In response, the paper introduces an information-theory-guided principle named Robust Graph Information Bottleneck (RGIB), which aims to extract reliable supervision signals and prevent representation collapse. RGIB achieves this by decoupling and balancing the mutual dependencies among graph topology, target labels, and representation, which creates new learning objectives for building robust representations in the face of bilateral noise. The authors present two specific implementations of RGIB, namely RGIB-SSL (which employs self-supervised learning) and RGIB-REP (which uses data reparametrization), for implicit and explicit data denoising respectively. The effectiveness of the proposed RGIB methods is validated through extensive experiments on six datasets and three GNNs under various noisy conditions.
Strengths: The paper is well-written with clear motivation and structure. Overall interesting problem; good mathematical exposition; solid theoretical results and analysis. This paper has a very comprehensive experimental analysis including the experiments in the appendix.
Weaknesses: This paper is overall good.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Most of my concerns are addressed in the appendix. I have no further questions or suggestions for this work.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: Board impact is discussed in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer Fboz for the valuable feedback and the positive support of our work.
Any further comments and discussions are welcomed! | null | null | null | null | null | null |
TRIAGE: Characterizing and auditing training data for improved regression | Accept (poster) | Summary: This paper presents a data characterization method for regression tasks. The method leverages the conformal predictive systems literature and proposes to estimate training data scores by thresholding the percentile of calibration data points given their conformity measure. The method is interesting in that it leverages the calibration dataset's CPD to score the training data. The grouping threshold (4.3) is less explained, but empirical results show it works. The paper claims to be the first data characterization method suitable for regression tasks.
Strengths: The proposed method is novel to me. Leveraging the calibration dataset to re-score training data is interesting. I am quite curious how the choice of calibration dataset would impact the method's performance.
The paper is very well motivated, perhaps even over-motivated, considering the justification and explanation missing in Sections 4.2 and 4.3.
The experimental results show the proposed method works well in benchmark datasets and could be a good tool to do data selection or feature selection.
Weaknesses: The paper takes rather long paragraphs to highlight its novelty and differences from other data characterization methods, which reads like an advertisement. The motivation part should be reduced in favor of more algorithm explanation. There is very little justification for why thresholding the CPD works for data characterization, or for where the rules in Eq. 3 come from. These need further justification and explanation.
Algorithm 1 needs further adjustment to state what the "eval sample" is. I think it is one training data point in D_train. The output of the algorithm should be a |X| x |M| matrix, right? Please state this somewhere to aid understanding.
The design of 4.2 is not explained. Why take the mean over training steps? What if we put more weight on higher iterations? What if there are multiple candidate regression methods? Would this alter the outcome?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Why take the mean over training steps when computing C(x,y)? What if we put more weight on higher iterations?
What if there are multiple candidate regression methods? Would this alter the outcome?
Why is the group assignment threshold designed as in Eq. 3? Any explanation or justification?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No concern. But the authors should further describe potential misuse of this method, where a decision maker could remove data from underrepresented populations based on the solution.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear ``R-N4d8``.
Thank you for your thoughtful comments which have helped improve the paper. We provide answers (A)-(E) & highlight updates to the paper
# (A) Evaluation over all training steps vs looking at higher iterations [Design motivation]
TRIAGE aims to analyze the behavior of different samples through the training process. Specifically, as shown in **Fig. 9 (Appendix A)**, while samples can converge to the same score at the end of training, the trajectories of TRIAGE scores (training dynamics) differ across samples: some converge quickly, while others take longer and oscillate more. We aim to use these differences to delineate samples. As shown in Fig. 9, most of the variability arises in the earlier training steps, which is why we compute the score over all steps to capture this early behavior. While we discuss this in Appendix A.4, we will further update it to be clearer and reference this point in the main manuscript as suggested.
**Experiment:** Beyond explanations, we also thank the reviewer for bringing up this interesting question. It has spurred us to conduct a new experiment to provide additional validation for why we want to evaluate over the entire training trajectory rather than focusing on later/higher iterations.
Specifically, we repeated the data sculpting experiment (original results in Table 2), computing the TRIAGE score starting at progressively later points in the training trajectory: after (i) 33%, (ii) 50%, (iii) 70%, and (iv) 80% of training. These contrast with the version in the manuscript, which we term TRIAGE (Base), computed from the beginning of training (i.e., starting at 0% of training). We compute the TRIAGE scores for each variant (i)-(iv), select samples for sculpting, and evaluate the test MSE of the regressor after sculpting.
We find an increase in MSE (worse performance) the further in the trajectory that we start the score computation. The results averaged over all calibration sizes are shown in **Fig.3 in the uploaded response PDF** and are reflected as MSE increase vs TRIAGE (Base). This highlights the importance of capturing the variability between sample training dynamics early in training before they reach steady state, to differentiate them. We thank the reviewer for the suggestion, as it helps further motivate our use of training dynamics & computing the mean over the whole trajectory.
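The late-start variants of this ablation can be sketched as follows; this is a minimal illustration over hypothetical per-epoch scores, not the actual TRIAGE score, which is derived from the CPDs as in Section 4.2:

```python
import numpy as np

def trajectory_score(per_epoch_scores, start_frac=0.0):
    """Average a per-sample score over the training trajectory.

    per_epoch_scores: (n_samples, n_epochs) array of a per-sample score at
    each checkpoint. start_frac > 0 discards the early part of the trajectory,
    mimicking the (i)-(iv) late-start variants; 0.0 corresponds to the
    full-trajectory TRIAGE (Base) setting."""
    n_epochs = per_epoch_scores.shape[1]
    start = int(start_frac * n_epochs)
    return per_epoch_scores[:, start:].mean(axis=1)
```

Two samples that converge to the same steady-state value become indistinguishable once the early epochs are dropped, which is consistent with the MSE degradation observed for the late-start variants.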
**UPDATE:** Refine the discussion in Appendix A (referring to it in the main paper) and include the new result in Appendix C.
# (B) Multiple candidate regressors
We evaluate the performance of TRIAGE and specifically the stability of the TRIAGE scores given differently parameterized regressors. Our result in **Fig. 4** highlights that the TRIAGE scores are more consistent than the baselines', with a Spearman correlation of 0.91. Consistent scores mean that the outcomes of tasks using them would also be similar. We have also conducted a similar analysis across different types of regressors in **Appendix C.2 (Table 5)**. We will add a discussion on this in the main manuscript and link it better to bring this result to the fore.
# (C) Adjustments and Clarifications
Thank you for catching the ambiguity in Algorithm 1. We will update it to state explicitly that an evaluation sample is indeed a sample from $D_{train}$. On the output dimensions: yes, after all samples are computed, we have an $M \times q$ matrix representing the different CPDs. We note that when we compute the TRIAGE score itself, the dimensionality is $M \times E$, where $E$ is the number of epochs.
**UPDATE**: We will update the text with these clarifications and notation updates.
# (D) Streamline Introduction
Thank you for the suggestion. We will streamline the introduction to make space for further discussion on the algorithm. Specific to your suggestion to include further motivation on the thresholding and curve — this additional space (along with the additional camera ready page) will allow us to bring the algorithmic motivations from Appendix A.5 and Appendix A.6 into the main paper.
# (E) Add discussion on potentially removing under-represented populations
We agree that this is an important consideration, especially around data sculpting when there is a data imbalance between majority and minority groups. We have evaluated this setting in **Appendix C.6.1**. We apologize that this assessment and discussion got lost in the numerous appendices. We will endeavor to better flag it beyond L330-331. Based on your suggestion, we will include it in the discussion Sec.6. To summarize the experimental result in Appendix C.6.1.
We show that under this imbalance scenario, TRIAGE still retains strong performance (with the lowest MSE) on the minority group compared to the baselines. This is because the calibration set contains a few of these minority samples, which prevents TRIAGE from filtering them out and hence retains good performance. This finding highlights an important aspect: the calibration set should be constructed to be as representative as possible. We will include a discussion on this as guidance to users of TRIAGE to promote safe and responsible usage.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thank you for the clarification. I would keep my current review with higher confidence.
---
Reply to Comment 1.1.1:
Comment: Dear ``R-N4d8``
Thank you for your feedback and suggestions that helped us strengthen the paper!
Regards
Paper 11604 Authors | Summary: The task of data characterization aims to address variations in individual-level performance despite achieving good average performance. Existing methodologies have predominantly focused on classification, leaving a gap in data characterization approaches for regression. In this paper, the authors propose the TRIAGE framework to bridge this gap. Extensive experiments demonstrate the framework's superior performance in regression tasks.
Strengths: - The paper is well-written.
- This is the first paper to introduce a principled data characterization framework in regression settings, supported by extensive experiments.
Weaknesses: Much of the content is in the appendix. It would be helpful to have more discussion of the appendix material in the main text.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - In a simplified setting (e.g., linear (kernel) regression?), is it possible to derive further theoretical guarantees on the behavior of C and V, defined in Section 4.2?
- Is the group assignment consistent? (in the sense that under what conditions are the samples accurately assigned their true group label?)
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: refer to weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear ``R-pNo8``.
Thank you for your thoughtful comments which have helped improve the paper. We provide answers (A)-(C) & highlight updates to the paper
# (A) Many appendices - include discussion in the main text
Thank you for suggesting that we better flag the contents of our numerous appendices.
**UPDATE:** In addition to the references interspersed in the text, we will use the additional camera-ready page to include a section outlining the new discussions and results covered in the Appendix.
# (B) Consistency of TRIAGE
We assess the consistency of the scores computed from the training dynamics for differently parameterized models in Fig. 4 (Sec 5.1). Since we use a threshold-based mechanism to assign groups, consistency of the scores implies consistency of the group assignments. We find that TRIAGE is much more consistent than the baselines, with a Spearman rank correlation of 0.91 averaged across datasets for differently parameterized models.
Additionally, in **Appendix C.2**, we further assess consistency using different model classes — namely XGBoost and CatBoost. We apologize that the link to your first point, about the many contents in the Appendix, was unclear. We will flag this additional set of results in Appendix C.2 in the main manuscript as an additional consistency assessment. The Spearman rank correlation of scores between these different models is similarly high, as shown in Table 5, with a mean of 0.88 ± 0.04. Given the consistency and stability of the scores, the subsequently assigned groups are also stable. We visualize this in Fig. 13 (Appendix C.2), which illustrates the stability and consistency of the TRIAGE characteristic curves.
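A minimal illustration of this kind of consistency check (hypothetical score vectors; a plain-NumPy Spearman rank correlation, assuming no tied scores):

```python
import numpy as np

def spearman_rho(a, b):
    """Spearman rank correlation: Pearson correlation of the ranks.
    (Assumes no tied scores, for simplicity.)"""
    rank_a = np.argsort(np.argsort(a))
    rank_b = np.argsort(np.argsort(b))
    return np.corrcoef(rank_a, rank_b)[0, 1]

# Hypothetical TRIAGE scores for the same samples from two model classes
# (e.g. XGBoost vs CatBoost); identical rankings give rho = 1.0
scores_model_1 = np.array([0.10, 0.40, 0.35, 0.80, 0.70])
scores_model_2 = np.array([0.12, 0.39, 0.30, 0.85, 0.72])

rho = spearman_rho(scores_model_1, scores_model_2)
```

A high rho between score vectors implies the same samples cross the group-assignment thresholds, which is why score consistency translates into stable group assignments.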
**UPDATE:** Flag the additional results in Appendix C.2 around consistency in the main manuscript in Sec 5.1 and in the dedicated discussion on contents in the Appendix.
# (C) Theoretical guarantees
We agree that providing theoretical guarantees would be valuable beyond just the strong empirical performance of TRIAGE. We wish to highlight two important aspects which make this process challenging:
- There is an interdependence between the CPD and the training dynamics in TRIAGE; disentangling their impacts theoretically is non-trivial.
- Scores across training epochs are correlated due to iterative model training. This adds complexity to any theoretical guarantee, given the dynamic nature of the scores and their correlation. In particular, theoretical results often assume independence, which we cannot assume under a training-dynamics-based perspective.
**UPDATE:** Given the complexities, we propose to discuss a theoretical guarantee as future work in Sec.6. We will also add an Appendix outlining the aforementioned challenges.
---
Rebuttal Comment 1.1:
Comment: I appreciate your thoughtful response. I would prefer to maintain my current score.
---
Reply to Comment 1.1.1:
Comment: Dear ``R-pNo8``
Thank you! And thanks again for your time and positive feedback.
Regards
Paper 11604 Authors | Summary: The authors introduce a new data characterization framework, TRIAGE, for regression models. The method utilizes conformal predictive distributions to compute the training examples' scores. To compute TRIAGE scores, the authors use predictive distributions and conformal prediction. A proper training set is used to train a regressor, and a separate calibration set is used for conformal calibration. Conformity measures the dataset's agreement with the observation. Consequently, the conformal predictive scores are computed at each epoch for each training point. Afterward, TRIAGE measures the mean and standard deviation for each training point. The method analyzes each example's training dynamics at each epoch to group each point into one of the groups: under/over/well estimated by the model based on the thresholds. TRIAGE can reduce the number of samples to train the regressor compared with the baseline methods. Specifically, it is observed that the MSE performance is improved with the selected data. It holds a steady advantage over four different scoring methods.
Strengths: + The authors propose a novel data characterization method for any regression models that analyze the training dynamic of each training point. Using conformal predictive systems for this problem is intuitive and effective.
+ TRIAGE effectively reduces the number of samples needed to train regressors compared with the baseline methods. It shows that MSE performance is improved with the retained data, holding an advantage over four different scoring methods.
+ The paper is well written.
Weaknesses: + Time complexity can be enormous for large neural networks to compute scores over multiple epochs.
+ Only simple datasets and simple regressors are used. The data sculpting experiment is also limited to 500 available samples.
+ Some more challenging points might be important in the medical field, but if TRIAGE discards those samples, the model would not learn the critical medical cases.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: + How much does the calibration dataset affect the data sculpting performance?
+ How exactly are distances computed in KNN for residuals of the calibration dataset? Does it use some embedding space?
+ What exactly do authors mean by model-agnostic? It seems that conformity scores are computed on residuals which are based on trained regression model predictions.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Dear ``R-pEXt``.
Thank you for your thoughtful comments which have helped improve the paper. We provide answers (A)-(E) & highlight updates to the paper
# (A) Computational time
Thank you for bringing up this point. We agree that analyzing the time cost to compute TRIAGE scores is important. We have run a **new experiment** where we vary the dataset size from *2000-100k* samples and assess the TRIAGE score computational time. Naturally, as the dataset size increases, so does computational time. That being said, even at 100k samples computing the TRIAGE scores for all samples takes <2min, highlighting the time efficiency and capability of TRIAGE to scale to large data sample sizes. We show the results in **Fig. 2 of the response pdf**.
**UPDATE:** Add the new result as a new Appendix, as an important analysis of the time cost of TRIAGE.
# (B) Clarifying data sculpting experiment sample size (“limited to 500 samples”)
We wish to clarify the sample size (Sec 5.2.2). The limited sample size up to 500 samples is just for the $D_{cal}$ (calibration dataset) in order to demonstrate that we only require a few calibration samples. The training set ($D_{train}$) which TRIAGE audits and sculpts, is the SEER medical dataset from the US which has a large sample size of **20k samples**. We apologize if this was unclear.
**UPDATE:** We will update Sec 5.2.2 to include the sample size of the SEER dataset to make it clear that we audit and sculpt a large dataset and that small samples are confined to $D_{cal}$.
# (C) Clarifying simple datasets & regressors
We wish to clarify that the datasets used are not simple but rather real-world datasets. As mentioned on L212-219 & Table 4 (Appendix B.2), our *10 datasets span sample sizes (500-100k) & dimensionality (8-85)*. In addition, the medical datasets used are reflective of actual regression settings. For instance, (i) SEER [25] and CUTRACT are Prostate cancer datasets from US and UK hospitals, (ii) Hospital Length of Stay [27] and (iii) MIMIC Antibiotics [28] are also real-world medical datasets.
We also study a variety of powerful regressors, including neural networks, XGBoost and CatBoost. These regressors are likely to be used in practice on these large and high-dimensional datasets.
**UPDATE:** We will clarify these two aspects at the start of Sec. 5.
# (D) Discarding critical samples (e.g. medical domain)
We appreciate the important consideration around data sculpting, especially in medical domains where minority sized groups are often critical samples. Our evaluation on precisely this issue can be found in **Appendix C.6.1**. We realize that our mention of L330-331 may not have drawn enough attention to it. To summarize, we show the prostate cancer setting where younger patients are a minority-sized group. In this imbalance scenario, TRIAGE still retains strong performance on the minority-sized group (with the lowest MSE) compared to baselines. The reason is that the calibration set contains a few minority samples, ensuring they aren't overlooked and discarded by TRIAGE. This underscores the importance of a representative calibration set for capturing critical cases. We will include a discussion on this as guidance to users of TRIAGE to promote safe and responsible usage.
**UPDATE:** Better flag our experiment in Appendix C.6.1 (on data imbalance) & include a discussion in Sec 6 around constructing a calibration set for safe usage of TRIAGE.
# (E) Clarifications
*(i) Clarifying the KNN in TRIAGE:*
We compute the errors (residuals) for all samples in $D_{cal}$. For sample $x_i$, we estimate its difficulty score ($\sigma$) as the mean absolute error of $x_i$’s k-nearest neighbors in $D_{cal}$, where $K=5$.
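A minimal sketch of this computation (hypothetical function name; a brute-force plain-NumPy nearest-neighbor search standing in for whatever KNN implementation the paper uses):

```python
import numpy as np

def knn_difficulty_scores(X_cal, residuals_cal, k=5):
    """sigma_i = mean absolute residual of x_i's k nearest neighbors in D_cal.
    Brute-force Euclidean distances; each point's own entry is excluded."""
    abs_res = np.abs(residuals_cal)
    # pairwise Euclidean distances between calibration samples
    diffs = X_cal[:, None, :] - X_cal[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))
    np.fill_diagonal(dists, np.inf)          # exclude each point itself
    idx = np.argsort(dists, axis=1)[:, :k]   # k nearest neighbors per point
    return abs_res[idx].mean(axis=1)
```

For real dataset sizes a tree- or index-based neighbor search would replace the quadratic distance matrix, but the score definition is the same.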
*(ii) Clarifying the term model-agnostic:*
By stating that Conformal Predictive Distributions offer a "model-agnostic" score, we mean that the score can be computed using any regressor. While the reviewer rightly points out that the core is to compute conformity scores (i.e., residuals), these scores can be derived post-hoc from any regressor, requiring only its output predictions. This "model-agnostic" approach contrasts with methods such as Bayesian ones, which necessitate specific modeling assumptions, thereby limiting their applicability across different regressors (see L130-132).
**UPDATE:** Explain in the main manuscript that by model-agnostic, we mean the conformity scores can be computed for any regressor just needing the outputs.
*(iii) Effect of the calibration dataset:*
The calibration dataset is important to data sculpting, as discussed in point (D), i.e. performance on minority-sized groups. We have investigated another aspect in **Appendix C.5**: how $D_{cal}$ affects the “validity/calibration”, especially if we violate the exchangeability assumption.
- Appendix C.5.1 shows we have valid CPDs if exchangeability is satisfied; hence will provide good sculpting.
- Appendix C.5.2 then shows the effect as $D_{cal}$ gets more non-exchangeable. We find that in the non-exchangeable setting when $D_{cal}$ is small e.g. <300 samples we still empirically achieve coverage and have high quality CPDs based on CRPS score & calibration curves. Moreover, we have good MSE sculpting performance, as shown in Table 2. That said, above 0.3 (> 300 samples) we have sufficient samples that violate exchangeability. This reduces the coverage below 0.9. Interestingly, this matches the change-over point in Table 2, where training directly on $D_{cal}$ will lead to better performance than sculpting. The difference though is not significant, with the MSE not drastically harmed. Additionally, our CPDs are still of high quality, based on the low CRPS score.
**UPDATE:** For clarity, we'll revise our reference on L337-339 to explicitly mention that Appendix C.5 evaluates the effects of $D_{cal}$.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' great effort in their responses.
I have two small questions:
+ Why does Figure 16 show the calibration size only up to 0.5?
+ I believe the neural network investigated by the authors is the Bayesian NN. What is the architecture of that network and how was it trained?
---
Reply to Comment 1.1.1:
Title: Clarifications
Comment: Dear ``R-pEXt``
Thank you for your feedback — we are glad our response has helped to address your comments. We clarify your two questions below.
(1) Figure 16 calibration size: To clarify, this experiment corresponds to the calibration sample sizes of Table 2 in the main paper, where $D_{cal}$ sizes range from 10-500 samples. For instance, as mentioned on L929: 0.3 (300 samples). In Table 2, we show that at 500 samples (corresponding to 0.5 in Fig 16), training on $D_{cal}$ directly is better than sculpting $D_{train}$, i.e. TRIAGE sculpting was beneficial at small sample sizes of $D_{cal}$. Thus, we capped the x-axis of Fig 16 to match Table 2. To prevent confusion, we will adjust Figure 16's x-axis to display the raw calibration size, aligning it better with Table 2.
(2) Neural network clarification: To clarify, the BNN is only used for the baseline “BNN sculpt” --- comparing prediction-based sculpting with uncertainty to TRIAGE sculpting. The BNN is an MLP regressor where we learn a distribution over the weights using variational inference, in the same way as [R1] --- allowing integration of uncertainty. In contrast, when using TRIAGE with an MLP, we *do not* learn a distribution over the weights. Instead, TRIAGE wraps a conventional MLP regressor that uses point-estimate weights. The MLP architecture, however, is the same between the BNN and the TRIAGE MLP.
We are grateful for the reviewer's time and suggestions, which have strengthened the paper. We hope these clarifications address your questions.
Paper 11604 Authors
[R1] Ghosh, Soumya, Jiayu Yao, and Finale Doshi-Velez. “Structured variational learning of Bayesian neural networks with horseshoe priors.” International Conference on Machine Learning. PMLR, 2018. | Summary: The problem studied in this work is the following: Given a dataset $\lbrace (x_i, y_i ) \rbrace_{i=1}^M$ and a regressor $f_\theta$ trained on this dataset, assign a group label $g_i$ to each sample that specifies whether the regressor under or overestimates on the sample. Such group labels can be used to identify and remove outliers in the dataset. The key formula of the proposed method is Eqn. (1), which is similar to Eqn. (5) in [23] that takes into account the "prediction accuracy" over each sample. The authors proposed to first put each sample into one of the several bins defined on a calibration set, and then assign a CPD score accordingly. The group label is then assigned using the CPD score as well as its variance. In the experiments, the authors use these group labels to remove outliers in the training set, and observe that it can improve the regression performance for a variety of regressors.
Strengths: - The problem studied in this work is very important for real applications since most datasets contain outliers.
- The desiderata for data characterization, as listed in (P1)-(P4) after line 44, are very clear and define the overall goal of this work.
- The experimental results presented in Section 5 support that the proposed method satisfies the desiderata.
Weaknesses: - The overall presentation is not good enough and there are many confusing points in this work, which I will discuss in the Questions section.
- The authors claim TRIAGE to be "the first data characterization framework tailored to regression settings" (lines 63-64), yet in [23] cited in this work, a similar approach for regression conformal prediction was proposed. In fact, the proposed method in this work largely resembles the method in [23], including the CPD score definition Eqn. (1) (versus Eqn. (5) in [23]), the use of calibration scores $\lbrace \alpha_1,\cdots,\alpha_q \rbrace$ (versus Eqn. (1) in [23]), and the use of KNN for estimating $\sigma$ (versus Eqn. (11) in [23]). Thus, I am not sure about the contributions of this work. However, I am sure that "TRIAGE is the first data characterization framework tailored to regression" is an overclaim.
- I feel that the writing is too wordy, especially in Section 5, and the authors use some LaTeX tricks which make the manuscript look more compact than necessary. I think the authors are able to make Section 5 more concise, so that the main text could easily fit in 9 pages and the paper would have a much better shape.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: 1. There are several confusing points in the manuscript that require clarification:
(a) In Algorithm 1, line 3, what do you mean by "nearest neighbor residuals of KNN"? Is it the average distance to the k nearest neighbors?
(b) In Eqn. (11) of [23], the definition of $\sigma$ considers two factors: (i) Whether a sample is close to its k nearest neighbors; (ii) Whether the k nearest neighbors have consistent labels. Does this work use the same definition for $\sigma$? If not, what are the differences?
(c) In the definition of the TRIAGE score $T(x,y,\theta)$ in lines 161-162, what do you mean by "$P(y \le f_\theta(x))$"? What is this probability taken over? If $x, y, f_\theta$ are all deterministic, why should there be a probability?
2. Can the authors make a thorough comparison between this work and [23], including the methodology, the algorithm, the experimental setup and results? This could help clarify the contributions of this work.
3. (Bonus) It would be great to make a theoretical connection between the proposed method and robust statistics. Consider a simple case, where $x$ is 1-dimensional, and the goal is to fit $y=f(x)$ on a dataset $(X,Y)$ with outliers. The dataset could follow Huber's contamination model. In such a scenario, I am curious about whether TRIAGE is provably better than the error baseline (as empirically compared in lines 244-260), and whether there is any guarantee on the performance of the predictor with the outliers removed using TRIAGE. A good starter for robust statistics could be the thesis of Jacob Steinhardt: https://cs.stanford.edu/~jsteinhardt/publications/thesis/paper.pdf. This is a bonus point, but such theoretical analysis could greatly enhance the contributions of this work.
**Post rebuttal note:** I have read the rebuttal and had a discussion with the authors. The authors have addressed most of my questions. I have raised my rating from 4 (original) to 6 (current).
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 1 poor
Contribution: 2 fair
Limitations: Limitations are discussed in Section 6.
**Review summary:** I think this work is interesting overall, and there are some good takeaways. However, the writing is confusing at times, and it also seems to me that a large part of this work is very similar to [23]. I believe many people will find this work useful, but I also believe that this work still needs improvement. So I give a borderline rating to this submission though I rarely give borderline ratings in my reviews. If the authors could address my questions during the rebuttal, I am willing to raise my score to 6 or 7, depending on the quality of the rebuttal.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear ``R-oVcJ``.
Thank you for your thoughtful comments to improve the paper. We provide answers (A)-(D) & highlight updates to the paper.
# (A) Comparing TRIAGE to Ref [23]
We discuss differences & similarities to illustrate TRIAGE’s contribution.
**Differences:**
1. **Objective:** TRIAGE performs data characterization, scoring samples on their impact on a regressor, enabling data-centric tasks like data sculpting, dataset selection & feature acquisition. In contrast, [23] is conventional conformal prediction for predictive uncertainty estimation.
2. **Algorithm:** (1) TRIAGE uses conformal predictive distributions (CPDs) providing a full predictive distribution. This contrasts [23]’s prediction intervals, which are less informative than a predictive distribution (see L50-52). (2) TRIAGE’s novelty is studying the training dynamics of scores. In contrast, [23] computes prediction intervals *once* after training, not reflecting dynamic changes, which we show are vital to characterize differences between samples.
3. **Experiments:** TRIAGE tackles data-centric tasks like data sculpting (Sec 5.2), dataset selection (Sec 5.3.1) & feature collection/acquisition (Sec 5.3.2). In contrast, [23] evaluates the prediction intervals for predictive uncertainty (coverage & efficiency).
4. **Results:** While [23] tackles different tasks from TRIAGE, we adapt the CP intervals for sculpting (Sec 5.2.2). Table 2's baseline "*CP Intervals Sculpt*" corresponds to a conformal regressor as in [23]. We will clarify this in the revision. Table 2 shows that porting CP intervals for sculpting are less effective than TRIAGE tailored for this task.
**Similarities:** (i) We clarify that Eq 1 (similar to Eq.5 in [23]) is *NOT* the CPD score but simply a conformity score. The CPD is per Eq 2. We will amend L145-150 to clarify. (ii) we clarify that the similarities noted on conformity score & calibration sets usage are general elements fundamental to conformal prediction itself & not unique to any method. This is akin to loss functions (conformity scores) & validation sets (calibration sets).
**UPDATE:** Add Appendix discussing differences between TRIAGE & [23].
# (B) Clarifications
(i) Clarify KNN residuals: We compute the errors for all samples in $D_{cal}$. For sample $x_i$, we estimate its difficulty score $\sigma$ as the mean absolute error of $x_i$’s k-nearest neighbors in $D_{cal}$, where $K=5$.
(ii) Do we use the same KNN definition as [23]: [23] evaluates multiple variants of the KNN score, including Eq 11 (see Papadopoulos et al. (2011)). To clarify, we use the KNN definition in Sec 3.1 of [23], which we describe in (i)
(iii) Clarifying the probability in the TRIAGE score: The CPD in Eq. 2, denoted $Q(y)$, returns a predictive (probability) distribution. We discuss the necessary condition for this interpretation on L148-150. Then, for a specific value $y$, the function returns the estimated probability $P(Y \leq y)$, where $Y$ is the true target and $y$ is the prediction $f_{\theta}(x)$. This score is then computed for the $E$ different checkpoint parameter values.
# (C) Streamline Sec. 5
Based on your suggestion, we will streamline the writing in Sec. 5, allowing us to expand the Figs side-by-side across the page width.
# (D) Bonus - TRIAGE & Robust Statistics
We are grateful for the reviewer's suggestion & for sharing the thesis resource.
(i) **TRIAGE vs. Robust Statistics**:
Motivated by your question, we contrast TRIAGE & Robust Statistics. (1) Post-hoc vs Built-in: TRIAGE wraps a regressor to detect and sculpt outlier samples, whereas Robust Statistics embeds outlier resilience within the model [R1,R2], e.g. via Huber loss [R2]. (2) Additional data-centric applications: TRIAGE tackles diverse "data-centric AI" tasks, like comparing synthetic data (Sec. 5.3.1) and feature acquisition/collection (Sec. 5.3.2), which is beyond the scope of robust statistics.
(ii) **Theoretical Analysis**:
Connecting TRIAGE theoretically to robust statistics is an intriguing question. However, we highlight two important challenges:
- Interdependence of CPD and training dynamics in TRIAGE. Disentangling their impacts theoretically is non-trivial.
- Scores across training epochs are correlated due to iterative model training. This highlights the complexity of any theoretical guarantee, given the dynamic nature of the scores & their correlation.
Given the complexities, we will discuss the theoretical proof as future work in Sec. 6, citing the provided thesis and the suggested Huber contamination setting. We will also add an Appendix outlining the aforementioned challenges.
As an initial step, we provide simulations.
(iii) **Simulation using Huber's Contamination Model**:
Our simulation setup mirrors [R3], generating data from a linear model ($y = X\beta + \eta$), with $X\sim U[0,1]$ and $\eta\sim \mathcal{N}(0,1)$. Mimicking Huber's model, we contaminate the response $y$ by corrupting an $\epsilon$ fraction of samples, replacing $\eta_i$ with $\eta_i + c_i$, where $c_i$ comes from a different distribution.
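This data-generating process could be sketched as follows (illustrative sample size, coefficient, and contamination shift; the paper follows the exact setup of [R3]):

```python
import numpy as np

def simulate_contaminated_data(n=1000, eps=0.1, beta=2.0, shift=5.0, seed=0):
    """Linear model y = X*beta + eta with Huber-style contamination:
    an eps fraction of samples has eta_i replaced by eta_i + c_i,
    where c_i is drawn from a different (shifted) distribution."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=n)
    eta = rng.normal(0.0, 1.0, size=n)
    y = X * beta + eta
    corrupted = rng.choice(n, size=int(eps * n), replace=False)
    y[corrupted] += rng.normal(shift, 1.0, size=corrupted.size)  # c_i
    return X, y, corrupted
```

The returned indices of corrupted samples give the ground-truth outliers against which a sculpting method's removals can be evaluated as $\epsilon$ rises.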
We compare TRIAGE with:
• Error baseline
• Training with Huber Loss
• TRIAGE applied to a Huber Loss trained model.
The results in the **uploaded pdf (Fig 1)** show TRIAGE has lower MSE vs the error baseline as $\epsilon$ rises. TRIAGE is also stable in response to contamination by virtue of the clean calibration set. Interestingly, combining TRIAGE with a model trained using Huber’s loss proves superior to using either alone, highlighting the compatibility of TRIAGE with robust techniques.
**UPDATE**:
- Add discussion about proof to Sec.6, citing the Huber setting & the resource (thesis)[R1]
- New Appendix to discuss the challenges of a theoretical proof
- New Appendix with the simulation results
[R1] J. Steinhardt. Robust learning: Information theory and algorithms
[R2] R. Maronna, et al. Robust statistics: theory and methods
[R3] M. Chen, et al. A general decision theory for Huber’s ε-contamination model
---
Rebuttal Comment 1.1:
Comment: Thank you for this rebuttal. Here is a list of changes I suggest the authors make to the paper:
- (A): Add this comparison to the paper.
- (B): Change line 3 of Algorithm 1 to "the average distance to k-nearest-neighbors".
- (C): Improve the writing of Section 5.
- (D): Add this discussion to the paper.
I have one follow-up question: In rebuttal (B), (iii), I still cannot see on which distribution the probability is taken. Let me pose this question in a clearer way. In line 176 of the submission, you wrote: $C(x^i, y^i) = \frac{1}{E} \sum_{e=1}^E T(x^i, y^i, \theta _ e) $, which is equal to $\frac{1}{E} \sum_{e=1}^E P(y^i \le f _{\theta _e}(x^i))$. Could you tell me on which distribution is the probability $P(y^i \le f _{\theta _e}(x^i))$ taken over?
---
Reply to Comment 1.1.1:
Title: Paper updates & clarification
Comment: Dear ``R-oVcJ``
Thank you for your feedback on the rebuttal.
----
### (1) Incorporating changes into the updated paper
We will definitely include your suggested changes which have come from our discussions on (A)-(D). These points will be integrated into the revised paper in the sections identified within the **UPDATE** blocks of our response. Thank you for your help in improving the paper!
-----
### (2) Clarification
To clarify, Conformal Predictive Systems output valid cumulative distribution functions, termed Conformal Predictive Distributions (CPDs). This is the cumulative probability with respect to a label $y$, given some $x$ and regressor $f$. With CPDs denoted as $Q$, the conformal p-values are arranged into a probability distribution that has the properties of a CDF — thus essentially becoming probabilities; see [22] for more details. *Appendix A.3* outlines the conditions necessary for $Q$ to be related to a CDF.
Since the CPD has the properties of a CDF, we use the CPD to estimate probabilities that the true target $y$ is less than or equal to a specified threshold/value. Thus, when you ask about the distribution over which the probability is calculated, it's the CPD that provides the probability estimation.
To be precise, we evaluate the function $Q$ for a specific $f_{\theta_{e}}(x)$ to get the estimated probability $P(y \leq f_{\theta_{e}}(x))$. We then do this for all $f_{\theta_{e}}$ checkpoints where $e \in E$ to get the trajectory of TRIAGE scores for sample $x$.
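A rough sketch of this evaluation, using an empirical-CDF simplification of the CPD and illustrative thresholds for the subsequent group assignment (not the exact smoothed construction in the paper):

```python
import numpy as np

def cpd_probability(value, calib_scores):
    """Empirical-CDF approximation of Q: the fraction of calibration
    conformity scores <= value."""
    calib_scores = np.sort(np.asarray(calib_scores))
    return np.searchsorted(calib_scores, value, side="right") / calib_scores.size

def triage_trajectory(x, checkpoints, calib_scores):
    """P(Y <= f_theta_e(x)) evaluated at each checkpoint f_theta_e."""
    return np.array([cpd_probability(f(x), calib_scores) for f in checkpoints])

def assign_group(trajectory, low=0.25, high=0.75):
    """Threshold the mean trajectory score (illustrative thresholds)."""
    c = trajectory.mean()
    if c > high:
        return "over-estimated"   # f(x) sits above most of the distribution of Y
    if c < low:
        return "under-estimated"
    return "well-estimated"
```

The key point is that $Q$ behaves like a CDF, so evaluating it at each checkpoint's prediction yields a probability trajectory whose mean (and variability) drives the group assignment.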
We hope this response clarifies and we will incorporate this more detailed explanation into the revised paper.
----
We are grateful for the reviewer's time and suggestions, which have strengthened the paper. We hope these changes address the reviewer's concerns. If you have any other comments or concerns, please let us know. We would be happy to do our utmost to address them.
Paper 11604 Authors
[22] Vladimir Vovk, Ivan Petej, Ilia Nouretdinov, Valery Manokhin, and Alexander Gammerman. Computationally efficient versions of conformal predictive distributions. Neurocomputing, 2020 | Rebuttal 1:
Rebuttal: We thank the reviewers for their insightful and positive feedback!
We are encouraged that they found the “problem being studied and data-centric perspectives attractive and important” (**R-ckq8**) for “real applications” (**R-oVcJ**) and that TRIAGE is a principled (**R-pNo8**) and “novel” (**R-pEXt, R-N4d8**) data characterization framework in regression settings. Further, they found TRIAGE’s use of conformal predictive systems “intuitive and effective” (**R-pEXt**), with “analysis of training dynamics from checkpoints is interesting” (**R-ckq8**) — with the “extensive experiments” (**R-pNo8**) supporting the “many interesting use cases” (**R-ckq8**), such as being a “good tool to do data selection or feature selection” (**R-N4d8**).
We address specific questions and concerns below and highlight updates based on reviewer suggestions that will be incorporated into the revised manuscript.
We have uploaded a ``response pdf`` with additional experiments. These include:
* Simulation study with Huber’s contamination model
* Computational time for 2000-100k samples, and comparison to Data Valuation methods
* Motivation for computing the TRIAGE score over all training steps
* Comparison to data valuation methods
On the basis of our clarifications and updates, we hope we have addressed the reviewers' concerns.
Thank you for your kind consideration!
Paper 11604 Authors
Pdf: /pdf/db0c1c8003146234053866b9edb863e9bead56ea.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper investigates the problem of training data characterization for regression problems. The authors noted that existing research on training data characterization mostly focuses on classification problems and there remains an absence of research for regression problems. This work proposes TRIAGE, a novel framework designed exclusively for regression settings and applies to a variety of tasks.
TRIAGE leverages conformal predictive distributions to provide a model-agnostic scoring method for evaluating how the model performs on each training sample. This framework also enables analysis into the training dynamics from checkpoints, visualizing the samples being under-/well-/over- estimated by the model during the training process.
The proposed framework is useful for a variety of data curation tasks, such as improving model performance by throwing out poorly-predicted training samples. Beyond sample-wise data analysis, the framework also applies to broader tasks such as dataset selection/feature acquisition. The work validates the proposed method with a range of empirical studies on 10 tabular datasets with 500-100k samples, showcasing its value for data characterization in practical cases.
Strengths: The problem being studied and data-centric perspectives are attractive and important. The work is well-motivated and nicely written with easy-to-follow illustrations.
The narrative of the paper is highly structured, skillful, intriguing and informative. I did enjoy reading this paper. The importance of the work is highlighted. References are substantial and high quality.
Objectives P1-P4 and the comparison table with baselines are clear and precise which can be informative and beneficial for a broader audience. And the improvements in these objectives are later validated in empirical studies, facilitating the verification of this work.
The technical body is clear. The analysis of training dynamics from checkpoints is interesting. Novel insights are provided in the takeaways. The work is practical with many interesting use cases.
Weaknesses: I am combining this part with the questions.
1. Since this work is situated in the context of data-centric AI and also studies data characterization problems, it is a bit unnatural to miss out on the comparisons with data valuation methods.
For example, Shapley-based methods are a common benchmark for the general interpretability of ML applications and also apply to characterizing the effects of training data on regression tasks. Also, I was wondering how model-agnostic data valuation ([1]) pipelines work in this setting.
I would like to see discussions both conceptually and empirically on how those methods perform in the context this paper studies and how they differentiate from the proposed approach.
[1] Lava: Data valuation without pre-specified learning algorithms, ICLR 2023
2. What is the rationale for throwing out data that is over-/under-predicted? It could be outliers, but would simply throwing out such data hurt the generalizability of the model or its robustness against distributional shifts, which are prevalent in real-world applications? Do the authors have any empirical study results on such cases?
3. Data selection is an established field of research. This work showcases that the proposed approach applies to dataset selection tasks. It would be beneficial if the authors could provide a direct comparison with data selection baselines and visualize their performance.
It won't harm the contribution of this paper if its data selection performance is suboptimal compared to methods designed exclusively for data selection tasks. But I think it is important to know how they compare exactly and benchmark the gap, which would be helpful for future works to compare with or improve over.
4. What is the computational overhead of the proposed framework? Can the authors provide the computation time? How scalable is the proposed framework? What is the largest scale the authors are able to apply it to?
The text in Figure 1 can be made larger.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: See Weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: No major limitations. Questions for discussion are listed above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear ``R-ckq8``.
Thank you for your thoughtful comments to improve the paper. We provide answers (A)-(D) & highlight updates to the paper
# (A) Additional comparisons
Thanks for suggesting an empirical & conceptual comparison to valuation methods (e.g. Shapley-based & LAVA) to strengthen the results.
First, we clarify that the Gradient Norm (GraNd) baseline [12] from the main paper falls into the data selection literature. Besides showing that TRIAGE improves upon GraNd, we note a key disadvantage of selection via GraNd linked to (P1) Consistent characterization: we show in Fig. 4 that the GraNd selector is not consistent across model parameterizations, with a Spearman correlation of 0.6 compared to TRIAGE's 0.91. This means [12] would select different subsets for different models, unlike TRIAGE. Additionally, GraNd only applies to neural nets, which makes it less flexible than TRIAGE.
*We now compare to data selection w/ data valuation.*
**(i) Experimentally:** We compare two Shapley valuation methods, (i) TMC-Shapley [R1] and (ii) KNN-Shapley [R2], as well as LAVA [R3] as suggested, which uses optimal transport. We compare in the context of Table 2: data selection for sculpting. The TRIAGE $D_{cal}$ is used as the validation set for these methods.
Note: (i) TMC-Shapley was computationally infeasible for all 20k samples; hence, we sample 5k and compare it to TRIAGE separately. (ii) KNN-Shapley & LAVA were run on the original 20k samples.
**Performance:** The results are shown in the **response pdf (Table 1 & 2)**. These methods are competitive with TRIAGE. However, TRIAGE tailored to regression outperforms them on downstream MSE performance.
**Computational time:** We assess compute time vs TRIAGE and show, for different sizes of $D_{cal}$, that TRIAGE is more time efficient; unlike Shapley methods, our cost does not increase with the size of $D_{cal}$. KNN-Shapley is 1-3X and TMC-Shapley 600X more expensive than TRIAGE. We show the results in **Fig. 2 in the response PDF**.
**UPDATE:** Include new results in Table 2 and add a new Appendix to discuss the new computational time experiments.
**(ii) Conceptually:** Differences between valuation methods & TRIAGE are: (a) TRIAGE unlocks other data-centric tasks beyond just data selection, e.g. feature-level acquisition/collection and selection between datasets. These are out of scope for data “sample” selection; (b) Unlike TRIAGE, these methods are not tailored to regression; (c) method differences: computationally, Shapley-based methods need to assess multiple sample permutations & retrain the model many times. Hence, they struggle to scale to very high sample sizes (e.g. 100k), unlike TRIAGE, where the cost is comparatively cheap (we experimentally compare this later). LAVA uses an additional embedding model to reduce the dimensionality before the optimal transport step.
**UPDATE:** Include a discussion with references on data valuation (Shapley & LAVA) in the related work (Sec. 2).
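To make the computational argument concrete, here is a minimal sketch of truncated Monte Carlo (TMC) Shapley [R1] on a toy regressor. The per-permutation retraining loop is what makes the method hard to scale; the model, data shapes, and truncation tolerance here are illustrative placeholders, not the paper's experimental setup.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def tmc_shapley(X, y, X_val, y_val, n_perms=20, tol=1e-3, seed=0):
    """Truncated Monte Carlo Shapley values (illustrative sketch).

    Every marginal contribution requires retraining the model from scratch,
    which is why Monte Carlo Shapley scales poorly with dataset size.
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    values = np.zeros(n)
    # Score on the full dataset, used for the truncation test.
    full_score = LinearRegression().fit(X, y).score(X_val, y_val)
    for _ in range(n_perms):
        perm = rng.permutation(n)
        prev_score = 0.0  # convention: empty coalition scores 0
        for j, idx in enumerate(perm):
            if abs(full_score - prev_score) < tol:
                score = prev_score  # truncate: more data barely changes the score
            else:
                subset = perm[: j + 1]
                model = LinearRegression().fit(X[subset], y[subset])
                score = model.score(X_val, y_val)
            values[idx] += score - prev_score  # marginal contribution of idx
            prev_score = score
    return values / n_perms
```

Even on this toy linear model, the inner loop performs up to `n_perms * n` model fits, in contrast to scoring methods whose cost is roughly one model pass per sample.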
# (B) Data sculpting under distribution shift
We agree that the distribution shift setting is interesting to assess. Our experiment in **Section 5.2.2: "Data sculpting to fit a deployment purpose"** evaluates this setting, where we sculpt $D_{train}$ of US patients to perform well on a different distribution of UK patients at deployment time. We apologize that this was unclear and will update the text to clarify.
Table 2 shows that with a small UK calibration set, TRIAGE sculpts the US data so that the regressor generalizes well on the UK test set, achieving the lowest MSE compared to baselines.
This highlights the importance of calibration set construction: even in a scenario with a distribution shift, good performance is attainable with TRIAGE by calibrating with a handful of relevant samples. We will include a discussion on this point in Sec. 5.2.2.
Additionally, in **Appendix C.6.1**, we look at generalization for distinct patient groups, specifically the minority-sized group. In this imbalanced setup, TRIAGE, due to calibration, preserves the performance for the minority group, contrasting favorably with baselines. We apologize that this was obscured given the numerous appendices, and we will refer to it more prominently than our initial mention on L330-331.
Finally, on insights, we refer the reviewer to **Appendix C.4**, where we show a radial plot (Fig. 14) providing insights into why we sculpt certain samples (over-/under-estimated), thereby allowing us to better match our deployment setting and generalize better. We will use the extra camera-ready page to add Fig. 14 to the main paper to anchor Table 2 with these insights.
**UPDATE:** Improve description of Sec. 5.2.2 to convey this is a distribution shift setting, better flag the results of Appendix C.6.1 in the main paper, which assesses minority and majority sized group performance, move the radial plot (Fig. 14) to the main paper to provide further insights on sculpting.
# (C) Computational time
Thank you for the suggestion. We agree that analyzing the time cost to compute TRIAGE scores is important. We have run a new experiment where we vary the dataset size from 2000-100k samples and assess the TRIAGE computation time. Naturally, as the size increases, so does computation time. However, even at 100k samples, computing the TRIAGE scores for all 100k takes <2 min, highlighting TRIAGE's time efficiency and ability to scale to larger data sizes. We show the results in **Fig. 2(a) of the response pdf uploaded**. We have also compared computational time to valuation methods as mentioned in (A).
**UPDATE:** Add a new Appendix to include the computational time results.
# (D) Text size Fig. 1
We will increase the font size in Fig. 1 for readability.
[R1] Ghorbani, A., & Zou, J. Data shapley: Equitable valuation of data for machine learning.
[R2] Jia, R et al. Efficient task-specific data valuation for nearest neighbor algorithms.
[R3] Just, H. A et al. LAVA: Data valuation without pre-specified learning algorithms.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: I appreciate the authors for their dedicated work and thanks for the response to my comments. My questions have been adequately discussed and I have no further comments at this moment. I would keep my positive rating in support of this work.
I hope the authors compile the new results and additional discussions into the paper or its Appendix.
Nice work and good luck,
Reviewer ckq8
---
Reply to Comment 1.1.1:
Comment: Dear ``R-ckq8``
We are glad our response addressed your comments. In the revised paper, we will definitely include these new discussions and results.
Thanks for your positive feedback and suggestions which have helped us improve the paper!
Regards
Paper 11604 Authors | null | null | null | null | null | null |
Contextual Gaussian Process Bandits with Neural Networks | Accept (poster) | Summary: The paper proposes an extension of Gaussian process (GP) based bandits to the contextual case, where the reward function is time- and context-dependent. The proposed method models the context dependency via a neural network and the mapping from actions to rewards via a multi-output GP. The inner product of the outputs of the GP and the neural network determines the reward.
Strengths: The paper is well-written, its language is clear, experiments thorough, results strong, and the theoretical analysis sound.
Despite the theoretical depth of the work, the paper studies two challenging and interesting use cases as bandit problems: queuing with LSTMs and pricing with graph convolutional neural nets. This gives the paper a solid standing. The graph application comes together nicely with the kernelized nature of the reward definition.
The results reported in Figures 3-6 are intriguing.
Weaknesses: Algorithm 1 has some novel aspects such as the way mu and sigma are calculated but the methodological novelty is rather incremental.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: It is not clear to me what the purpose of Section 3.3 is. What do we expect from studying the maximum information gain aspect?
Why is the proposed method not compared against NeuralUCB also in queuing and pricing tasks?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper does not discuss the limitations of the proposed approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We thank the reviewer for the valuable time, questions and comments. We find them very helpful. Please find our response to the comments below.
$\textbf{Comment 1: }$Algorithm 1 has some novel aspects such as the way mu and sigma are calculated but the methodological novelty is rather incremental.
$\textbf{Response:}$ We thank the reviewer for the comment. We recognize the extensive work done on GP-based methods. However, the novelty of our proposed NN-AGP model lies in the innovative integration of both GP and NN models, which not only enhances the performance but also broadens the applicability of the bandit algorithm.
In addition, our approach uniquely retains the GP structure regarding the decision variable $\mathbf{x}$, maintaining the key statistical properties and explicit uncertainty quantification inherent to GP. This structure ensures that bandit algorithms utilizing NN-AGP can seamlessly leverage the power of established GP-based acquisition functions designed for different settings and scenarios. As a case in point, while our main text focuses on the UCB acquisition function, we also delve into other acquisition functions in Section 6 of the supplements. In conclusion, we specifically retain the existing methodologies for GP while incorporating the NN model to enhance the bandit algorithm performance.
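As a generic illustration of UCB-style acquisition over a GP posterior, the following sketch uses scikit-learn's stock GP regressor rather than the NN-AGP model, so it is only a schematic analogue of the algorithm discussed above; the toy reward function and candidate grid are assumptions for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def ucb_select(gp, candidates, beta=4.0):
    """UCB acquisition: trade off posterior mean (exploitation)
    against posterior standard deviation (exploration)."""
    mu, sigma = gp.predict(candidates, return_std=True)
    return candidates[np.argmax(mu + np.sqrt(beta) * sigma)]

rng = np.random.default_rng(0)
f = lambda x: np.sin(3.0 * x).ravel()        # toy "unknown" reward function
X_obs = rng.uniform(0.0, 2.0, size=(5, 1))   # previously queried decisions
y_obs = f(X_obs)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-4).fit(X_obs, y_obs)
grid = np.linspace(0.0, 2.0, 201).reshape(-1, 1)
x_next = ucb_select(gp, grid)                # next decision variable to query
```

Because NN-AGP retains an explicit GP posterior in the decision variable once the context is observed, this same acquisition step carries over with the NN-AGP posterior mean and variance in place of the plain GP ones.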
$\textbf{Comment 2: }$It does not look clear to me what the purpose of Section 3.3. What do we expect from studying the maximum information gain aspect?
$\textbf{Response:}$ We appreciate this constructive comment. In Section 3.2, we prove that the cumulative regret in terms of rewards of the bandit algorithm NN-AGP-UCB is upper-bounded by $\tilde{\mathcal{O}}\left(\sqrt{T\gamma_T}\right)$. Here $\gamma_T$ is the maximum information gain of the proposed model NN-AGP, and we study it in Section 3.3 to provide the upper bound on the cumulative regret as a function of the total number of rounds $T$.
$\textbf{Comment 3: }$Why is the proposed method not compared against NeuralUCB also in queuing and pricing tasks?
$\textbf{Response: }$We appreciate this constructive comment. In queuing and pricing tasks, the contextual variables are complex and represented by time series and graphs. We select the long short-term memory (LSTM) model and the graph convolutional neural network (GCN) to model the mappings from these contextual variables in our NN-AGP-UCB algorithm. In contrast, NeuralUCB (as well as NN-UCB) is designed for vector-valued contextual variables and employs fully connected networks (FCN). Therefore, NeuralUCB and NN-UCB are not applicable in queuing and pricing tasks because of the complex contextual variables.
$\textbf{Comment 4: }$The paper does not discuss the limitations of the proposed approach.
$\textbf{Response:}$ We appreciate this constructive comment. We will definitely follow the comment and enhance the discussion in the next iteration. One of the limitations of our work is the cold-start issue during the initial rounds. That is, there are no sufficient data for learning the neural network in NN-AGP. This issue can be addressed by the transfer learning technology [1]. That is, at the beginning of NN-AGP-UCB, we can retain some layers of the neural network in NN-AGP, which are previously learned in similar tasks. This incorporation with transfer learning is also mentioned in the conclusion section.
To illustrate the transfer learning technology with NN-AGP-UCB, we conduct numerical experiments and present the results in $\textbf{Figure 1}$ of the document attached in the global response. Specifically, we consider an unknown reward function $f_{T}$ and we also have access to functions $f_{s},s=1,2,\ldots,5$ that have a similar structure with $f_T$. We first sample each $f_s$ for 50 or 100 rounds and learn an NN-AGP model with these samples. The NN component in NN-AGP helps to transfer the knowledge from $f_s$ to $f_T$. That is, during the initial rounds of NN-AGP-UCB with $f_T$, we first fix the input layer of the pretrained NN and update the remaining layers with the new data, which is a widely-used transfer learning method named freezing. Experimental results indicate that transfer learning from similar tasks helps to address the cold-start issue, and NN-AGP-UCB with/without transfer learning will converge to the similar regrets as the round increases.
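The layer-freezing step described above can be sketched in PyTorch; the architecture below is a hypothetical stand-in for the NN component of NN-AGP, not the paper's actual network.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the NN component of NN-AGP.
net = nn.Sequential(
    nn.Linear(8, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 4),
)

# "Freezing": fix the pretrained input layer and fine-tune the remaining
# layers on data from the new task f_T.
for p in net[0].parameters():
    p.requires_grad = False

# Only pass the unfrozen parameters to the optimizer.
trainable = [p for p in net.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-3)
```

Here the frozen input layer carries over knowledge learned on the similar tasks $f_s$, while the unfrozen layers adapt to $f_T$ during the initial rounds.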
Another limitation is due to the computational cost of NN-AGP. As mentioned in the conclusion section, since NN-AGP retains a GP structure, it suffers from computational complexity with large data sets. For future work, we consider sparse NN-AGP to alleviate the computational burden; see also a discussion in Section 9 of the supplements. We will include a more detailed and explicit discussion on the limitations in the revised main text.
$\textbf{References:}$
$[1]$ Weiss, K., Khoshgoftaar, T.M. and Wang, D., 2016. A survey of transfer learning. Journal of Big data, 3(1), pp.1-40.
---
Rebuttal Comment 1.1:
Title: Answers satisfactory
Comment: Thanks for your clear answers to my questions. I keep my view that this is a solid piece of work and also my score unchanged.
---
Reply to Comment 1.1.1:
Comment: Thanks again for your time and comments! | Summary: This work studies the contextual bandit problem with the Gaussian process. Authors tried to use neural networks to learn reward functions and introduce one algorithm NN-AGP. Then conduct empirical evaluation and theoretical analysis for it.
Strengths: Recently, the bandit community paid more and more attention to the neural network approximation. It is interesting to combine it with GP in bandits. Authors use extensive experiments to demonstrate its practicability and conduct rigorous regret analysis for it.
Weaknesses: (1) This work is an extension of [37], and the similarity is very high. The key difference from [37] is changing the reward function to the inner product of $g(\theta)$ and $p(x)$, where $g$ is the neural network function and $p$ is the Gaussian process. The optimization of $p$ and the UCB-based exploration are adapted from [37]. But I don't see very novel aspects of $g$ except that it can be replaced by different neural network functions. Therefore, I think the overall novelty of this paper is limited.
(2) The required assumption of regret analysis is strong. It requires that the space of $\theta$ is convex and compact, but it is known the parameter space of neural network is non-convex and not compact. Moreover, the analysis is based on [37] and [34,35] (the information gain part). Authors may want to discuss more about [34, 35] from analysis aspect.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: (1) In Figure 2, I am wondering why there is a big jump first and then starts dropping.
(2) How about the running time cost (portion) of GP in this algorithm?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Strong assumption and incremental novelty.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable time, questions and comments. We find them very helpful. Please find our response to the comments below.
$\textbf{Comment 1: }$This work is an extension of [37], and the similarity is very high. The key difference from [37] is changing the reward function to the inner product of $g(\theta)$ and $p(x)$, where $g$ is the neural network function and $p$ is the Gaussian process. The optimization of $p$ and the UCB-based exploration are adapted from [37]. But I don't see very novel aspects of $g$ except that it can be replaced by different neural network functions. Therefore, I think the overall novelty of this paper is limited.
$\textbf{Response:}$ We thank the reviewer for the comment. Please allow us to clarify the novelty and associated comparative advantage of our work relative to the literature. Our work considers the contextual Gaussian process bandit problem, which was also addressed in [1] ([37] in main text). The novelty of our proposed NN-AGP model lies in the innovative integration of both GP and NN models, which partially addresses the challenge of pre-specifying an appropriate joint GP model for approximating the reward function. Based on the numerical experiments, our approach outperforms the existing GP-based bandit methods by specifying a data-driven kernel function through the lens of neural networks. In addition, by employing different structures of neural networks (e.g., graph neural networks), our approach is applicable to diverse application scenarios where the contextual variable is represented in complex forms other than a vector. We hope that the above clarification may partially alleviate the potential concern.
$\textbf{Comment 2: }$The required assumption of regret analysis is strong. It requires that the space of $\boldsymbol{\theta}$ is convex and compact, but it is known the parameter space of neural network is non-convex and not compact. Moreover, the analysis is based on [37] and [34,35] (the information gain part). Authors may want to discuss more about [34, 35] from analysis aspect.
$\textbf{Response:}$ We agree with the reviewer's comment that the parameter space of neural network is non-convex and not compact. In fact, we would like to clarify that parameter $\boldsymbol{\theta}$ in our work does not represent the neural network parameters. The parameter $\boldsymbol{\theta}$ denotes the contextual variable that is input to the reward function (Line 82, Page 3 in our manuscript).
Compared to assumptions on the neural network parameters, the assumption of convexity and compactness on the set of contextual variables appears to be more common; see also [1]. Indeed, for NN-AGP-UCB, this assumption can be relaxed to that $\sup_{\boldsymbol{\theta}\in \Theta} \\{ \\{ |\sum\_{l=1}^m \boldsymbol{g}\_{l}(\boldsymbol{\theta})a\_{l,q} | \\}\_{q=1}^Q , \\{ | \boldsymbol{g}\_{l}(\boldsymbol{\theta}) | \\}^m\_{l=1} \\}$ and $\sup_{\boldsymbol{\theta}\in \Theta}||\boldsymbol{g}\left(\boldsymbol{\theta}\right)||_2^2$ exist.
In addition, our theoretical analysis borrows ideas from [1] and [2], and we mention it in Section 8 of the supplements, considering the length of main text. We also include a comparison of our theoretical results with those in [1] in the supplements. We will discuss more about [1,2] from analysis aspect in the revised main text.
$\textbf{Comment 3: }$In Figure 2, I am wondering why there is a big jump first and then starts dropping. (2) How about the running time cost (portion) of GP in this algorithm?
$\textbf{Response:}$ We appreciate the opportunity to clarify the appearance of a "big jump" in Figure 2 of the main text. In fact, the mean average regret of all the compared algorithms decreases as the number of rounds increases, and there is no "jump" in the numerical results. The "jump" in Figure 2 is actually the overlap of the shadowed regions of the curves, which indicate the standard deviation of the average regrets. In contrast with Figure 1, Figure 2 is associated with a function with higher-dimensional decision and contextual variables. Consequently, there is heightened uncertainty in the initial stages illustrated in Figure 2. We will clarify the figure in the revised manuscript.
In terms of the running time, we record 1) the training time that constructs the surrogate model based on the historical data and 2) the execution time that selects the decision variable after the contextual variable is revealed. We record the time (seconds) for exactly one round in the 50-th, 100-th, and 300-th rounds. We take the first set of experiments in Section 4.1 as an example and present the results in $\textbf{Table 1}$ of the document attached in the global response.
We notice that CGP-UCB is the most efficient in training time since it employs a pre-specified GP model which does not update during iterations. On the other hand, all the algorithms that involve NN require learning NN from data and longer training time than CGP-UCB. In terms of the execution time, NN-AGP-UCB requires similar time as CGP-UCB, since the selection of the decision variable of NN-AGP-UCB is based on GP as well. To sum up, the training time is largely due to learning NN from the data, while the execution time is for selecting a decision variable based on the GP model. We will add this information to the supplements. In addition, we consider sparse NN-AGP to alleviate the computational burden for future work; see also a discussion in Section 9 of the supplements.
$\textbf{References:}$
$[1]$ Krause, A. and Ong, C., 2011. Contextual gaussian process bandit optimization. Advances in neural information processing systems, 24.
$[2]$ Srinivas, N., Krause, A., Kakade, S.M. and Seeger, M., 2009. Gaussian process optimization in the bandit setting: No regret and experimental design. arXiv preprint arXiv:0912.3995.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I am wondering what is the complexity of $\gamma_T$ and how to bound it?
---
Reply to Comment 1.1.1:
Comment: We appreciate the opportunity to clarify the maximum information gain $\gamma_T$. Specifically, the upper bound of $\gamma_T$ depends on the kernel function used in the GP component of NN-AGP. We consider two general categories of kernel functions: polynomial eigendecay and exponential eigendecay (see details in Definition 1 on Page 6), which include the most commonly used kernel functions. For polynomial eigendecay, $\gamma_T = \mathcal{O} \left ( T^{\frac{1}{\alpha _p} }\log ^{1-\frac{1}{\alpha_p} }\left ( T \right ) \right )$, where $\alpha_p>1$ is a constant. For exponential eigendecay, $\gamma_T = \mathcal{O} \left ( \log ^{1+\frac{1}{\alpha_e} }\left ( T \right ) \right )$, where $\alpha_e>0$ is a constant. More detailed results on the upper bound of $\gamma_T$ are summarized as Theorem 2 on Page 6, and we will provide this simplified expression in the revised version. In order to provide this upper bound of maximum information gain, we first explore the Mercer decomposition and the eigenvalues associated with NN-AGP. Then, we analyze the upper bound of the maximum information gain through the eigenvalues. The details are postponed to Section 8.2 of the supplements considering the length of the main text. | Summary: This paper proposes a reward model, called the neural network-accompanied Gaussian process (NN-AGP) for solving contextual bandit problems where the space of contexts and the space of decision variables may be continuous. This model is an inner product of a neural network and a multi-output GP. The neural network captures the dependence of the reward function on the contextual variables while the GP is to model the mapping from the decision variable to the reward. The authors propose the NN-AGP-UCB algorithm for this problem in the form of the upper confidence bound strategy. They derive the regret for NN-AGP-UCB as well as the maximum information gain in their setting. 
Finally, they provide the experiments to evaluate their proposed algorithm for complex reward functions, including those with time-varying dependence on sequenced and graph-structured contextual variables.
Strengths: - The paper introduces a reward model for contextual Gaussian process bandits which is more general than the previous work by Krause el al [37] by employing a neural network to capture the dependence of the reward function on the contextual variables. This allows to use different neural networks appropriate for applications with diverse structures of contextual information. This is also demonstrated in their experimental results.
- The regret analysis and the upper bound of the maximum information gain are provided under this model.
- The experimental results on different structures of context information are also a strong point of this paper.
Weaknesses: - As defined, the reward model is an inner product of a neural network (NN) and a multi-output GP. This reward structure does not seem natural compared to ones that are entirely either GP or NN. In addition, there are many possible combinations of an NN and a GP to construct a reward model. Therefore, a more general model would be better.
- In the conclusion section, the authors claimed that the advantages of their approach are the approximation accuracy for the reward function and better performance on cumulative rewards/regrets. However, this is not correct: using an NN to model the reward in the overparameterized regime also allows for approximation accuracy of the reward function. Moreover, they said their approach has better performance on regrets, but it is not clear which related works they are comparing with.
- It lacks the comparison of the regret bounds of the proposed algorithm and related algorithms like NeuralUCB, NeuralTS, and NeuralLinUCB which use entirely a NN, and algorithms which use entirely a GP to model the reward function.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Please see the questions in the Weaknesses section.
- It would be interesting if the authors take into consideration the influence of the proposed NN in their regret analysis, at least in the overparametrized regime.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable time, questions and comments. We find them very helpful. Please find our response to the comments below.
$\textbf{Comment 1: }$As defined, the reward model is an inner product of a neural network (NN) and a multi-output GP. This reward structure seems not natural compared to the ones which are entirely either GP or NN. In addition, there are also many possible combinations of a NN and a GP to construct a reward model. Therefore, a more general model would be better.
$\textbf{Response:}$ We completely agree with the reviewer on the many possible combinations of NN and GP. Our particular choice of combining a neural network and a multi-output GP as an inner product is motivated by the differentiation between the decision variable and the contextual variable. By doing so, we ensure an explicit GP expression for the decision variable $\mathbf{x}$ once the contextual variable is observed in each round. Such a design not only facilitates explicit quantification of reward function approximation uncertainty, guiding decision variable selection, but also provides theoretical regret bound guarantees. While there are various ways to combine NN and GP, a more general model might compromise the explicitness of the GP expression for $\mathbf{x}$. Our approach leverages the strong flexibility and approximation accuracy of NN while preserving the statistical properties of GP, and appears to have reliable numerical performance. We plan to add a detailed remark to reflect this helpful comment.
$\textbf{Comment 2: }$In the conclusion section, the authors claimed that the advantage of their approach is the approximation accuracy for the reward function and better performance on cumulative rewards/regrets. However, this is not correct. Using a NN to model reward in overparameterized regime allows the approximation accuracy for the reward function. Moreover, they said their approach has better performance on regrets. However, it is not clear which related works they are comparing with.
$\textbf{Response:}$ We thank the reviewer for the comment. We would like to clarify that the advantage in approximation accuracy of NN-AGP is over the joint GP model. We also acknowledge that using an overparametrized NN to model the reward function generally yields better approximation accuracy provided sufficient data. However, there are two challenges for algorithms that are entirely based on an NN. First, there is no explicit uncertainty quantification for the NN approximation; thus, addressing the exploration-exploitation trade-off requires further approximation, which affects the performance of bandit algorithms. Second, an overparametrized NN requires sufficient data to train, which may not be available in bandit problems, especially in the initial rounds. Therefore, our NN-AGP-UCB algorithm achieves better regret performance than both algorithms that rely entirely on a GP (CGP-UCB) and algorithms that rely entirely on an NN (NeuralUCB and NN-UCB), which is also supported by the experimental results. We will revise the conclusion section to provide a clearer statement and more careful descriptions.
$\textbf{Comment 3: }$It lacks the comparison of the regret bounds of the proposed algorithm and related algorithms like NeuralUCB, NeuralTS, and NeuralLinUCB which use entirely a NN, and algorithms which use entirely a GP to model the reward function.
$\textbf{Response:}$ We thank the reviewer for the comment. For the algorithm that uses entirely a GP model, CGP-UCB [1], we defer the comparison to the supplements (the end of Section 8.1 on Page 18). Specifically, NN-AGP-UCB has the same bound of $\tilde{\mathcal{O}}\left(\sqrt{T\gamma_T}\right)$ as CGP-UCB, but is superior when the contextual variable dimension is high. For the algorithms that use entirely an NN, we note that NeuralUCB [2], Neural TS [3] and Neural LinUCB [4] all consider scenarios in which the decision variable $\mathbf{x}$ is selected from a finite set. In comparison, we consider $\mathbf{x}$ selected from a continuous set (Line 87, Page 2). When performed on a finite feasible set $\mathcal{X}$, our NN-AGP-UCB also has a regret bound of $\tilde{\mathcal{O}}\left(\sqrt{T\gamma_T}\right)$, where $\gamma_T$ further depends on the kernel function of the GP component used in NN-AGP. When the kernel function has an exponential eigendecay (see Definition 1 in Line 242, Page 6), NN-AGP-UCB has a regret bound of $\tilde{\mathcal{O}}\left(\sqrt{T}\right)$, matching the regret bounds of NeuralUCB, Neural TS and Neural LinUCB as well.
$\textbf{Comment 4: }$It would be interesting if the authors take into consideration the influence of the proposed NN in their regret analysis, at least in the overparametrized regime.
$\textbf{Response:}$ We thank the reviewer for the constructive suggestion. In Section 7 of the supplements, we provide an algorithm NN-AGP-UCB+, which accounts for the neural network approximation error and performs more conservatively than NN-AGP-UCB. We also provide the regret bound of NN-AGP-UCB+. We agree that the overparametrized regime would be an inspiring direction for future work and the neural tangent kernel (NTK) can be employed to analyze the regret.
$\textbf{References:}$
$[1]$ Krause, A. and Ong, C., 2011. Contextual gaussian process bandit optimization. Advances in neural information processing systems, 24.
$[2]$ Zhou, D., Li, L. and Gu, Q., 2020, November. Neural contextual bandits with ucb-based exploration. In International Conference on Machine Learning (pp. 11492-11502). PMLR.
$[3]$ ZHANG, W., Zhou, D., Li, L. and Gu, Q., 2020, October. Neural Thompson Sampling. In International Conference on Learning Representations.
$[4]$ Xu, P., Wen, Z., Zhao, H. and Gu, Q., 2020. Neural contextual bandits with deep representation and shallow exploration. arXiv preprint arXiv:2012.01780.
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal. I keep my current score.
---
Reply to Comment 1.1.1:
Comment: Thank you again for your time and comments. | Summary: This work proposes a NN accompanied GP model. It leverages NN to approximate the unknown reward function regarding the context variable and maintains a GP with the decision variable. By introducing NN, the proposed method offers a better approximation accuracy. Theoretical implications, including maximum information gain and regret bounds, are provided for the proposed method. The effectiveness of the proposed method is also supported by empirical evaluation on both a synthetic and real-world dataset.
Strengths: 1. A well-motivated and well-designed algorithm with solid theoretical analysis of the statistical properties of the proposed model.
2. The proposed model is flexible enough to be used together with different types of NN models.
3. Empirical evaluation of a diverse set of tasks is included and shows the promising effectiveness of the proposed method.
Weaknesses: 1. Potential limitations of the model: Although NN could help enable a better approximation accuracy on the reward function regarding the context variable, the training of NN models may make the model computationally prohibitive. In addition, NN typically works well with a large amount of data, which may cause a cold-start issue when the proposed model is used.
2. May need to include more metrics in the empirical evaluation, such as computation cost and prediction latency.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Can the authors provide corresponding justifications or empirical evidence on the potential limitations of the proposed method mentioned in the weaknesses section?
I am willing to adjust my rating based on the authors' response to my question above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We thank the reviewer for the valuable time, questions and comments. We find them very helpful. Please find our response to the comments below.
$\textbf{Comment 1: }$Potential limitations of the model: Although NN could help enable a better approximation accuracy on the reward function regarding the context variable, the training of NN models may make the model computationally prohibitive. In addition, NN typically works well with a large amount of data, which may cause a cold-start issue when the proposed model is used.
$\textbf{Response:}$ We admit that incorporating an NN into bandit problems generally requires sufficient data to approximate the unknown reward function. Thus, the cold-start issue affects, in principle, all bandit algorithms that use an NN. Compared with algorithms that rely fully on an NN (e.g., NeuralUCB and NN-UCB), our NN-AGP-UCB actually suffers less from the cold-start issue, which is supported by the numerical results in Section 4.1. The reason is that, in existing NN-based bandit algorithms, the NN is responsible for approximating the entire reward function. In comparison, in the NN-AGP model, the NN is used specifically to approximate the mapping from the contextual variable to the reward function, while the approximation with respect to the decision variable is handled by the GP. It is widely accepted that a GP generally requires less data than an NN in practical applications, and therefore NN-AGP helps to ease the cold-start issue.
Moreover, to further address the cold-start issue, transfer learning techniques [1] can be incorporated, as mentioned in the conclusion section. We conduct numerical experiments and present the results in $\textbf{Figure 1}$ of the document attached in the global response. Specifically, we consider an unknown reward function $f_{T}$, and we also have access to functions $f_{s},s=1,2,\ldots,5$ that have a structure similar to $f_T$. We first sample each $f_s$ for 50 or 100 rounds and learn an NN-AGP model from these samples. The NN component in NN-AGP helps to transfer knowledge from $f_s$ to $f_T$. That is, during the initial rounds of NN-AGP-UCB with $f_T$, we fix the input layer of the pretrained NN and update the remaining layers with the new data, a widely used transfer learning method known as freezing.
Experimental results indicate that transfer learning from similar tasks helps to address the cold-start issue, and NN-AGP-UCB with/without transfer learning converges to similar regrets as the number of rounds increases. We also note that, to the best of our knowledge, there has not been extensive work on transfer learning with NN-based bandit algorithms. These NN-based bandit algorithms largely rely on the neural tangent kernel (NTK) to address the exploration-exploitation trade-off when selecting the decision variable; however, how to transfer knowledge between different domains with the NTK remains an open question. In comparison, the exploration-exploitation trade-off in NN-AGP-UCB is handled by the GP, and existing transfer learning techniques for NNs can be easily adapted into our algorithm. Other methodologies for addressing the cold-start issue when learning an NN in an online setting can also be employed; see [2,3]. We will add a remark addressing this to our revised introduction.
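As an illustration only (not the authors' code), the freezing step described above might look like the following PyTorch sketch; the network shape and data are hypothetical stand-ins:

```python
import torch
import torch.nn as nn

# Hypothetical NN component of NN-AGP, pretrained on the source tasks f_1, ..., f_5.
net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 8))

# "Freezing": fix the input layer and update only the remaining layers
# with new data from the target task f_T.
for p in net[0].parameters():
    p.requires_grad = False

trainable = [p for p in net.parameters() if p.requires_grad]
opt = torch.optim.Adam(trainable, lr=1e-3)

x = torch.randn(16, 4)   # stand-in data from f_T
y = torch.randn(16, 8)
loss = ((net(x) - y) ** 2).mean()
opt.zero_grad()
loss.backward()
opt.step()               # frozen input layer receives no gradient and is unchanged
```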
$\textbf{Comment 2: }$May need to include more metrics in the empirical evaluation, such as computation cost and prediction latency.
$\textbf{Response:}$ We appreciate this comment. We record 1) the training time for constructing the surrogate model from the historical data and 2) the execution time for selecting the decision variable after the contextual variable is revealed. We record the time (in seconds) for exactly one round at the 50th, 100th, and 300th rounds. We take the first set of experiments in Section 4.1 as an example and present the results in $\textbf{Table 1}$ of the document attached in the global response.
We notice that CGP-UCB is the most efficient in training time, since it employs a pre-specified GP model that does not update during iterations; its training procedure only requires matrix operations, which can be implemented efficiently. On the other hand, all the algorithms that involve an NN require learning the NN from data and thus have longer training times than CGP-UCB. In terms of execution time, NN-AGP-UCB requires a similar time to CGP-UCB, since the selection of the decision variable in NN-AGP-UCB is based on the GP as well. We also note that both NN-UCB and NeuralUCB are originally designed for finite sets of decision variables; thus, their execution-time cost is largely due to searching for the optimal decision variable over the discretized feasible set. We will add this information to the supplements. In addition, we leave sparse NN-AGP, which alleviates the computational burden, for future work; see also the discussion in Section 9 of the supplements.
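A minimal sketch of how the per-round training/execution times could be recorded; the timed steps below are hypothetical stand-ins, not the actual surrogate fitting or acquisition maximization:

```python
import time

def timed(fn):
    # Wall-clock time of one call; used here for both the per-round
    # training step and the per-round decision (execution) step.
    t0 = time.perf_counter()
    out = fn()
    return out, time.perf_counter() - t0

# Hypothetical bookkeeping at rounds 50, 100, and 300.
log = {}
for t in (50, 100, 300):
    _, train_s = timed(lambda: sum(i * i for i in range(10_000)))  # stand-in: fit surrogate
    _, exec_s = timed(lambda: max(range(1_000)))                   # stand-in: pick decision
    log[t] = {"train_s": train_s, "exec_s": exec_s}
```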
$\textbf{References:}$
$[1]$ Weiss, K., Khoshgoftaar, T.M. and Wang, D., 2016. A survey of transfer learning. Journal of Big data, 3(1), pp.1-40.
$[2]$ Wei, J., He, J., Chen, K., Zhou, Y. and Tang, Z., 2017. Collaborative filtering and deep learning based recommendation system for cold start items. Expert Systems with Applications, 69, pp.29-39.
$[3]$ Wolfe, C.R. and Kyrillidis, A., 2022. Cold Start Streaming Learning for Deep Networks.arXiv preprint arXiv:2211.04624.
---
Rebuttal Comment 1.1:
Title: My questions have been addressed
Comment: I thank the authors for providing additional evidences to further support their claims and resolve the concerns. My questions have been addressed accordingly. The second weakness pointed out can be removed. I will keep my score as 6: Weak Accept.
---
Reply to Comment 1.1.1:
Comment: Thank you again for your time and comments. In the revised version, we will include a detailed discussion on computation cost and prediction latency as suggested. | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their reviews and for providing us with valuable comments to improve our work. We have addressed every comment to the best of our ability. In addition to the point-by-point responses to each reviewer, we also attach a document in this global response, containing 1) a table recording training/execution times and 2) a figure recording the average regrets in a transfer learning setting.
We record 1) the mean training time for constructing the surrogate model from the historical data and 2) the mean execution time for selecting the decision variable after the contextual variable is revealed. We record the time (in seconds) for exactly one round at the 50th, 100th, and 300th rounds. We take the first set of experiments in Section 4.1 as an example and present the results in $\textbf{Table 1}$ in the attached document.
We also include the experimental results of transfer learning with NN-AGP-UCB in $\textbf{Figure 1}$ of the attached document. Specifically, we consider an unknown reward function $f_{T}$, and we also have access to unknown functions $f_{s},s=1,2,\ldots,5$ that have a structure similar to $f_T$. We first sample each $f_s$ for 50 or 100 rounds and learn an NN-AGP model from these samples. The NN component in NN-AGP helps to transfer knowledge from $f_s$ to $f_T$. That is, during the initial rounds of NN-AGP-UCB with $f_T$, we fix the input layer of the pretrained NN and update the remaining layers with the new data, a widely used transfer learning method known as freezing.
Pdf: /pdf/df173a87d90a1ef7fda796b39994aeff0d8b6655.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Diversify \& Conquer: Outcome-directed Curriculum RL via Out-of-Distribution Disagreement | Accept (poster) | Summary: This paper introduces a novel curriculum learning algorithm (D2C) in the context of reinforcement learning. Unlike previous approaches, D2C does not rely on prior knowledge of the environment's structure, including the distance measure between states. Instead, it leverages a goal-conditioned binary classifier to differentiate between visited states and desired states. When learning this classifier, D2C additionally incorporates a diversification objective to encourage the classifier to disagree on unvisited states. The paper claims that D2C could automatically generate a diverse set of curriculum goals. These goals, in turn, provide intrinsic rewards that guide the agent towards achieving the desired goal state.
Strengths: 1. The technique introduced in this paper that identify the similarities between the explored states and desired outcomes with a diversification objective appears to be novel within the domain of curriculum RL.
2. Empirical evaluation of the proposed methods in the selected maze and robotics manipulation environment shows the significant improvement of the proposed algorithm.
Weaknesses: 1. The D2C algorithm relies on having access to a dataset of unlabeled target data in order to model unvisited states. However, this assumption can be a significant limitation of the algorithm. In the experiments conducted in this paper, the author was able to overcome this limitation by uniformly sampling states from the upper and lower bounds of the state space. Nevertheless, obtaining such a collection of (legal) states within the state space of agents or robots is often a challenging task. For instance, if the agent receives pixel observations or if the legal state space of a robotic arm is a constrained subset of the entire space defined by the cartesian product each dimension, the assumption of sampling from the legal state space to obtain the unlabeled target dataset may not be applicable. In general, this assumption holds true only in specific environments or tasks.
2. The current version of the proposed algorithm has some fundamental limitations. For example, the proposed algorithm could be prone to over-explore. Also, how the diversification process helps the overall curriculum proposal process is not clear to me. Please refer to my questiona in the next section.
I am happy to raise the score if my questions are addressed by the author.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Regarding the diversification: It seems that when optimizing the objective of curriculum distribution, the uncertainty/disagreement between different classification heads are not in the objective. Equation 4 simply minimizes the cross entropy between the averaged probability of the proposed curriculum state being a goal state and the average probability of the true goal states. Where does the uncertainty of the unexplored region play a role in the overall objective of the algorithm?
2. The problem of over-exploration: Based on the learning objective of D2C, suppose the agent is given a maze environment similar to the one shown in Figure 2. But if the goal distribution only includes the top one, then is it true that this proposed algorithm would still explore both the top and bottom part of the maze?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weaknesses:**
1. The D2C algorithm relies on ...
→ We appreciate your insightful comment about the extension to high-dimensional spaces. As you have rightly pointed out, directly applying the bipartite matching problem (Section 4.3) in high-dimensional spaces may not work. However, we can consider an encoder-decoder structure with ELBO-based representation learning techniques such as a VAE to abstract high-dimensional inputs into low-dimensional latent vectors $z$, and measure the Euclidean distance between the $z$ of the query image and that of the goal image, similar to [1]. We think these ideas may extend our bidirectional curriculum generation method to high-dimensional spaces rather than being restricted to low-dimensional ones.
Also, for state inputs to the diversified conditional classifiers, we used the mapping $\phi(\cdot)$ that abstracts the state space into the goal space, as described in Appendix A (e.g. $\phi(\cdot)$ extracts the global xyz position of the object in the robot manipulation task). This enables us to construct a simple and legal state space within the lower and upper bounds in most cases. Since knowing the meaning of each state element is a commonly utilized assumption in RL (e.g. the hindsight relabeling technique), we think this is not a significant limitation.
Also, as you may be concerned, there could be infeasible states within the lower and upper bounds. For instance, an obstacle may sit at the center of the end-effector's feasible range, or the end-effector position obtained by applying forward kinematics to a uniformly sampled joint position may conflict with the sampled end-effector position (if joint states are included in the state). However, the D2C classifiers disagree even on these infeasible states, because they appear neither in the desired outcome examples nor among the explored states in the replay buffer (the agent cannot reach them). Since we obtain curriculum goals from the replay buffer, infeasible states cannot be proposed as curriculum goals; the agent keeps exploring toward feasible, unexplored areas based on the disagreement between classifiers, and eventually discovers the desired outcome states.
If you still have remaining concerns or if we misunderstood your question, please let us know. We would be happy to discuss this.
[1] Nair, Ashvin V., et al. "Visual reinforcement learning with imagined goals." *Advances in neural information processing systems*
31 (2018).
2. The current version of the proposed ...
→ Question section
**Questions:**
1. Regarding the diversification ...
→ Thank you for your comment about the diversification. The pseudo probability in Eq (3) represents how similar the queried state is to the explored states or desired outcome states (lines 178-188), and disagreement between classification heads is represented as intermediate values between the 0 (explored states) and 1 (desired outcome states) when optimizing the curriculum loss in Eq (4).
For example, before discovering the desired outcome states, the batch data sampled from the replay buffer in Eq (4) include a diverse range of states from frequently visited states to rarely visited ones such as the states in the frontier of the explored region. As described in lines 178-188, $P_{pseudo}(y=1|g^+)$ is nearly 1 and the loss will be large for the frequently visited states as all classifiers correctly classify them as a label 0 (i.e. $P_{pseudo}(y=1|s)=0$), and the loss will be small for the states in the frontier since the classifiers disagree due to the diversification (i.e. $P_{pseudo}(y=1|s)$ is larger than 0). Thus, the curriculum objective encourages selecting the states in the frontier region.
After discovering the desired outcome states, the curriculum objective encourages selecting the states close to the desired outcome state since the corresponding loss will be the smallest among the losses of states in the replay buffer.
For clarity, we would like to note that the cross-entropy in Eq (4) is used as a convex function that decreases as $P_{pseudo}(y=1|s)$ increases, not as an exact entropy minimization to match a probability distribution (because $P_{pseudo}$ in Eq (3) is not mathematically a probability, hence the prefix “pseudo”).
We believe that these responses together adequately address this comment. However, if you still have remaining concerns or questions please let us know.
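To illustrate the mechanism described above, here is a minimal numerical sketch of an Eq (3)-style pseudo probability and an Eq (4)-style curriculum score. The head logits are hypothetical, chosen only to mimic agreement on explored/goal states and disagreement on frontier states:

```python
import numpy as np

def pseudo_prob(head_logits):
    # Eq (3)-style pseudo probability: average of per-head sigmoid outputs.
    # Heads agree (near 0 or 1) on explored / desired states and, due to the
    # diversification objective, disagree (intermediate values) on frontier states.
    return (1.0 / (1.0 + np.exp(-np.asarray(head_logits)))).mean(axis=0)

# Three hypothetical heads scoring four candidate states from the replay buffer:
# col 0: frequently visited; cols 1-2: frontier (heads disagree); col 3: goal-like.
logits = [
    [-6.0, -6.0,  2.0,  5.0],
    [-6.0,  4.0, -1.0,  5.0],
    [-6.0, -5.0,  3.0,  5.0],
]
p = pseudo_prob(logits)

# Eq (4)-style curriculum score: cross-entropy toward label 1 -- smaller loss
# means the state looks more like a desired outcome (or a frontier state).
loss = -np.log(np.clip(p, 1e-8, 1.0))
best = int(np.argmin(loss))
```

Before the goal is discovered, frontier states (intermediate `p`) beat well-explored states (`p` near 0); once goal-like states (`p` near 1) enter the buffer, they dominate the selection.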
2. The problem of over-exploration ...
→ Thank you for your insightful comment. If the goal distribution only includes the top one, then D2C will propose curriculum goals in both directions at the initial phase, since the frontier states of both ways will have the pseudo probability of 0.5 (for example, in the case of two classification heads). However, as the agent discovers the goal distribution at the top, the curriculum goals will be converged to the distribution at the top since states in these distributions will have the pseudo probability of 1, and the agent does not explore toward the other direction anymore.
We would like to note that we assume a setting where only some desired outcome examples are given before training begins and access to the desired outcome distribution is unavailable. Therefore, the setting where the goal distribution only includes the top one means we want the agent to reach the top one only. Thus, after discovering the top one, no more exploration toward the other direction is a natural consequence. If we want to explore the other direction also, it means there exist desired outcomes in the other direction, so we should be equipped with the desired outcome examples corresponding to the bottom one before training begins. The detailed motivation of this outcome/example-driven method is well summarized in prior works such as [1].
[1] Fu, Justin, et al. "Variational inverse control with events: A general framework for data-driven reward definition." *Advances in neural information processing systems* 31 (2018).
---
Rebuttal Comment 1.1:
Title: Response to Authors
Comment: Thank you for the detailed response. I've examined other reviews and all of your replies closely. Nonetheless, I feel my concerns haven't been sufficiently addressed and so I will keep my original score.
My doubts regarding the strong assumptions of D2C persist, specifically concerning the access to a collection of unlabeled target data/states. It's challenging to envision how this method can scale to pixel observations. For instance, in the sawyer push environment, if the problem is being tackled solely from pixel observations, how would one amass such a target dataset without prior knowledge of the task?
Furthermore, I'm skeptical about the real benefits of borrowing the idea of diversification & disambiguation from [1] to RL in addition to this extra assumption. The goal in the original paper is to learn diverse hypotheses so that at least one learned hypothesis/network does not depend on spurious features. However, in the settings we have here, the goal is to encourage the agent to explore unseen states and reach the goal states in the end, which is a completely different purpose. As pointed out by reviewer geTA, there are many existing methods proposed as exploration methods for RL, such as RND [2], Plan2Explore [3], Plan2Predict [4], and so on. Although the author comments on the problem of over-exploration induced by these methods, I don't think I would agree with the argument. As long as a large external/sparse reward is assigned for goal achievement, the agent will secure a high reward upon reaching this state. It will learn from the reward feedback and know how to reach the goal state. Hence, I'm afraid I have to disagree with the author's noted limitation, and I am doubtful of the benefits of incorporating diversification & disambiguation into this setting.
Reference:
[1] Lee et al. Diversify and Disambiguate: Learning From Underspecified Data
[2] Burda et al. Exploration by Random Network Distillation.
[3] Sekar et al. Planning to Explore via Self-Supervised World Models
[4] Wu et al. Plan To Predict: Learning an Uncertainty-Foreseeing Model for Model-Based Reinforcement Learning
---
Reply to Comment 1.1.1:
Comment: Thank you for your detailed response. As described in the Limitation section, our work in its current form might not be applicable in the pixel observation setting, as you rightly pointed out.
To alleviate this, we can consider a setting where the agent additionally explores via random actions after reaching the proposed goal image, and the classifiers do not use the random-action transition data as label-0 data. If we treat all the images in the buffer, the desired outcome examples, and the transitions obtained via random actions as unlabeled target data, then the classifiers will disagree only on images in the randomly explored areas, which enables us to query unseen, frontier states and propose a curriculum goal image at the frontier of the explored region. Although it depends on the effectiveness of the random exploration strategy, this could be one possible option for extending our work to the pixel-based setting.
Furthermore, we would like to note that most of the prior works (e.g. OUTPACE, HGG, CURROT, GoalGAN, etc) that explicitly propose curriculum are also not applicable to the pixel-based setting since these have assumptions/limitations similar to our work or cannot propose the curriculum goal image that corresponds to a completely novel state even with the assistance of generative models.
Also, as described in the response and manuscript, our problem setting is an example/outcome-directed RL where access to the ground truth (external) reward function is unavailable. Since the referred works [2,3,4] can be used only when we can explicitly access the reward function itself or implicitly access this through environmental interaction, our method is completely different from the RL with external+intrinsic reward setting in [2,3,4]. Therefore, in addition to the benefits of our method described in response to the reviewer geTA, we believe that directly contrasting our work with the prior works that require access to the ground truth reward function is neither necessary nor appropriate.
Of course, we acknowledge that conducting experiments with pixel-based scenarios would enhance the quality of our work. Nevertheless, it is clear that addressing the “without external reward” framework itself is not only necessary but also increasingly crucial to effectively address RL problems that closely resemble real-world scenarios where access to the ground truth reward function is unavailable. In this regard, we wish to emphasize that D2C has contributions in terms of providing a foundation for addressing the example/outcome-directed RL approach with curriculum learning. | Summary: This paper proposes a new reinforcement learning algorithm called Diversify for Disagreement & Conquer (D2C).
The idea of the algorithm is to divide the search space and let conditional classifiers explore these subspaces.
With a set of experiments, the authors verify the effectiveness of D2C compared to various other approaches.
As someone new to the field, I cannot judge the relevance of the contribution to reinforcement learning.
Strengths: As a person new to the field of RL, I found this paper well written, since it introduces the necessary concepts at an appropriate depth, and the idea and motivation are intuitive.
In addition, the approach is geometrically flexible and shows its effectiveness in a range of experiments, and the authors provide an in-depth ablation study of relevant hyperparameters.
Weaknesses: Again, I need to admit that I am entirely new to this field. At the same time, the other baseline approaches all explore the search space in an undirected manner. Is it a fair comparison when your approach can explore many directions situationally?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See above
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations:
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weaknesses:**
1. Again I need to admit that I am entirely new to this field. At the same time, the other baseline experiments all do explore the search undirected. Is it a fair comparison when your approach can explore many directions situationally?
→ First of all, the meaning of “situationally” is slightly unclear to us, so if we have misunderstood your question and our answer is not appropriate, please let us know.
We would like to note that most of the baselines and our method perform undirected exploration until the agent discovers the desired outcome states; only HGG and CURROT perform directed exploration from the initial phase. Some baselines keep performing undirected exploration even after the agent discovers the desired outcome states (e.g. VDS, ALP-GMM, PLR), while the other baselines and our method propose converged curriculum goals after discovering the desired outcome states (e.g. HGG, CURROT, OUTPACE). This is summarized in Table 1 (column: Target dist. of curriculum). Therefore, we think the comparison is fair, since all algorithms perform an undirected search with their own strategies before discovering the desired outcome states.
---
Rebuttal Comment 1.1:
Title: please acknowledge author response
Comment: dear reviewer
could you please let us know if you read the author response and if the response or the discussions here change your assessment?
thanks
AC | Summary: This paper propose training multiple classifiers on a data set of "desired" and "reached but undesired" states, motivating these classifiers to be different off distribution, and using this divergence as a metric for exploration.
Strengths: The approach is simple and easy to implement in many domains.
Weaknesses: Since the approach is very similar to model-disagreement, I would expect a simple comparison to such an approach, like exploration via disagreement. Moreover, given that random-network distillation works so well for exploration, it's unclear whether the objective of the classifier here is really that important. It may only be important that the metric involves disagreement between two networks, and it may not matter what they are disagreeing about.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: What advantage does this approach have over generic model-disagreement, or random network distillation?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Given the number of existing approaches, and an unclear understanding of their relative limitations, it is unclear when this approach should be preferred over the others in a new domain.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weaknesses:**
1. Since the approach is very similar to model-disagreement, I would expect a simple comparison to such an approach, like exploration via disagreement. Moreover, given that random-network distillation works so well for exploration, it's unclear whether the objective of the classifier here is really that important. It may only be important that the metric involves disagreement between two networks, and it may not matter what they are disagreeing about.
→ Thank you for your insightful comment. As you rightly pointed out, our approach is similar to model-disagreement in terms of encouraging the agent to explore uncertain areas. VDS, which utilizes the epistemic uncertainties of value function ensembles to propose curriculum goals, is included as a baseline in the experiment section as a model-disagreement-based method. Also, Random Network Distillation (RND) encourages exploration by defining an intrinsic reward based on the prediction error of a network rather than explicitly proposing curriculum goals. However, VDS and RND do not have a convergence mechanism toward the desired outcome states. That is, they simply encourage endless exploration to find novel states even after discovering the desired outcomes, which is not desirable in goal-reaching settings.
Even though we can imagine combining the external reward from environmental interaction with RND's intrinsic reward, this still requires hyperparameter tuning to balance the external (exploitation) and intrinsic (exploration) rewards. Also, since these methods do not consider a multi-modal desired outcome distribution, they are prone to exploitation. That is, the agent does not explore other desired outcome states once it has found a specific desired outcome earlier. In contrast, our proposed method keeps exploring in the same setting because the curriculum cost for the undiscovered desired outcome states is not yet minimized. Furthermore, our method provides an integrated methodology not only for desired-outcome-directed curriculum generation but also for the goal-conditioned intrinsic reward, compared to VDS and RND, which assume access to an external reward.
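To make the contrast concrete, here is a minimal numerical sketch of the RND mechanism discussed above: a frozen random target network and a trainable predictor, with the prediction error serving as the intrinsic reward, which decays as a state is visited repeatedly. Both networks are single linear maps, and every dimension and learning rate here is an illustrative assumption, not taken from the paper or from the RND implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM, FEAT_DIM, LR = 4, 8, 1e-2
W_target = rng.normal(size=(OBS_DIM, FEAT_DIM))  # frozen random target
W_pred = rng.normal(size=(OBS_DIM, FEAT_DIM))    # trainable predictor

def intrinsic_reward(obs):
    # Prediction error against the frozen target: high for rarely
    # visited states, decaying for states the predictor has fit.
    err = obs @ W_pred - obs @ W_target
    return float(np.mean(err ** 2))

def update_predictor(obs):
    # One gradient-descent step on the mean squared prediction error.
    global W_pred
    err = obs @ W_pred - obs @ W_target
    W_pred -= LR * 2.0 / FEAT_DIM * np.outer(obs, err)

state = rng.normal(size=OBS_DIM)
before = intrinsic_reward(state)
for _ in range(200):
    update_predictor(state)  # "visit" the same state repeatedly
after = intrinsic_reward(state)
print(before > after)        # novelty bonus decays with familiarity
```

This also illustrates the limitation the rebuttal points out: nothing in the mechanism makes the agent converge on desired outcome states; the bonus simply decays everywhere the agent has been.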
For the last concern, we would like to note that it does matter what the classifiers are disagreeing about. If the classifiers disagree on already-explored states, efficient exploration toward the frontier is hindered, since the agent tries to reach proposed curriculum goals in already-explored regions. Also, if the classifiers disagree on desired outcome states, the proposed curriculum goals will not converge to the desired outcomes. This not only hinders the acceleration of the curriculum proposal toward the desired outcomes after discovering them but also prevents the agent from repeatedly practicing reaching the desired outcomes. Since we assume a setting where only some desired outcome examples are given before training begins, and access to the desired outcome distribution and external reward is unavailable, convergence to the desired outcome states is required. Therefore, we think it matters that the classifiers disagree only on the unexplored states.
We believe that these responses together adequately address this comment. However, if you still have remaining concerns or questions please let us know.
**Questions:**
1. What advantage does this approach have over generic model-disagreement, or random network distillation?
→ Weakness section
**Limitations:**
1. Given the number of existing approaches, and an unclear understanding of their relative limitations, it is unclear when this approach should be preferred over the others in a new domain.
→ Thank you for your comment. We added more details of the baseline as follows. (Also, there are conceptual comparisons between D2C and baselines in Table 1 and brief explanations are in lines 249-259. A quick review of these would be helpful to clarify the pros and cons of each different approach.)
Since HGG and CURROT utilize a Euclidean distance metric, they are prone to getting stuck at obstacles in environments such as Maze, while our method, D2C, does not have this problem.
PLR and ALP-GMM leverage regret or learning progress, which implicitly indicates the novelty or difficulty of the proposed curriculum. However, these methods do not have a convergence mechanism to the desired outcome distribution, which induces endless exploration rather than achievement of the given task. Furthermore, ALP-GMM depends on a Gaussian Mixture Model (GMM) that is susceptible to focusing on infeasible goals, where the agent's capability stagnates at an intermediate level of difficulty.
OUTPACE is similar to our approach, but it requires Wasserstein distance-based temporal distance estimation for curriculum proposal, which can result in collapsed curriculum goals as described in lines 267-272. Also, for uncertainty quantification, it adopts meta-learning (MAML [1]) that requires gradient computation at every optimization iteration, while our method only requires a single neural network inference, leading to much faster curriculum optimization.
Overall, the proposed method, D2C, has several advantages compared to other curriculum RL baselines, and we think it could be preferred over others in general.
[1] Finn, Chelsea, Pieter Abbeel, and Sergey Levine. "Model-agnostic meta-learning for fast adaptation of deep networks." *International conference on machine learning*. PMLR, 2017.
---
Rebuttal Comment 1.1:
Title: please acknowledge author response
Comment: dear reviewer
could you please let us know if you read the author response and if the response or the discussions here change your assessment?
thanks
AC
---
Rebuttal Comment 1.2:
Title: Response to Authors
Comment: I thank the authors for their detailed response.
There are two critical arguments about the applicability of this approach which were brought up in the rebuttal but were not made clear in the original paper. I think it is quite important for the paper to be reframed to make these points clear as these two points are make-or-break for a reader understanding the usefulness of the method.
> However, VDS and RND do not have a convergence mechanism to the desired outcome states
This is framing "convergence to the desired outcome states" as a key criterion of what you would want out of a curriculum approach, which is indeed an important criterion that the vast majority of other approaches overlook. Making it clear why this criterion is not just nice-to-have but absolutely necessary, and making it a centerpiece of the work, embracing it as a key strength, would really strengthen the work.
> Nevertheless, it is clear that addressing the “without external reward” framework itself is not only necessary but also increasingly crucial to effectively address RL problems that closely resemble real-world scenarios where access to the ground truth reward function is unavailable.
This is framing the approach as tackling not only the curriculum problem but also the reward specification problem. This should also be embraced as a strength. If readers feel like it is just a "nice to have" criterion, then the domain-specific work required from the designer seems hard to justify. Ideally, you would show this by having a domain where you do not have a reward function and it is hard to specify one, but you can get the agent to perform the task anyway using your approach (doing a backflip in MuJoCo, for instance, or other tasks in the DRL-from-human-preferences space).
Without these re-framings, the choice made by the method to require the designer to provide data of desired states appear to be a severe limitation for a curriculum method. While other curriculum methods are complete drop-in to a new environment, this one requires domain-specific work from the designer, which most RL researchers would rather avoid.
With these framings, the choice made to use designer provided data is obviously necessary, and worth it because 1) providing a few examples is easier than providing a reward function, and 2) given that the designer has to specify something anyway, may as well specify these examples rather than a reward function because they also allow better curricula.
Thus it is very critical that the readers understand these two arguments from the paper. I suggest that these arguments be placed prominently in some form in the abstract/introduction so that they can not be easily overlooked.
Given that what I had previously seen as weaknesses of this work I now see as strengths, I will be raising my score to a 6
---
Reply to Comment 1.2.1:
Comment: We sincerely appreciate your valuable insights and feedback. After conducting a thorough examination of our manuscript in light of the two pivotal points you highlighted, we found that these two arguments had been mentioned, but their significance and implications seem to have been under-emphasized. In line with your feedback, we are committed to revising the manuscript to better emphasize the core concept and benefits of the proposed method, thereby ensuring enhanced reader understanding. Your feedback is really helpful for refining the quality of our work. Thank you for your constructive response. | Summary: Focusing on goal-conditioned RL, the authors propose Diversify for Disagreement & Conquer (D2C) to perform outcome-directed exploration by generating a sequence of curriculum goals, given some desired outcome examples. By ensuring that multiple classifiers disagree on unseen states, D2C uses bipartite matching to create curriculum goals, which are interpolated between the initial state distribution and arbitrarily distributed desired outcome states, enabling the agent to conquer the unexplored region. The experiments demonstrate that D2C surpasses previous curriculum RL methods in goal-conditioned RL experiments.
Strengths: - I appreciate the idea of using diversified functions for disagreement on underspecified data and of quantifying the similarity between visited states and desired outcome states. Using the pseudo probability as the intrinsic reward is also straightforward. The proposed method is simple but effective.
- The paper is well written.
Weaknesses: - It is not clear to me how the curriculum distribution is interpolated from the initial state distribution by minimizing Eq 4.
- Some related work and its comparison are missing. Diversity has been considered in curriculum RL; for example, Curriculum-guided Hindsight Experience Replay (NeurIPS 2019).
- It is not open sourced.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Please clarify the interpolation process as referenced in weakness.
- Figure 6 can be better if the labels do not cover the curves. The font sizes in the figures are too small to read.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weaknesses:**
1. It is not clear to me how the curriculum distribution is interpolated from the initial state distribution by minimizing Eq 4.
→ Question section.
2. Some related work and its comparison missing. Diversity has been considered in Curriculum RL. For example, Curriculum-guided Hindsight Experience Replay (NeurIPS 2019).
→ Thank you for your suggestion about the related work. We had a quick look at the CHER paper, and there are some related keywords, so we will add this work in the revised version. However, in terms of methodology, there are several key differences between CHER and our method.
First, CHER is a goal-relabeling technique like HER, applied during the RL update phase rather than proposing curriculum goals during the rollout phase. That is, CHER is more appropriate as an add-on module for our method than as a direct comparison, since our method is also based on HER, as described in Algorithm 1 in Appendix B.
Second, as far as we understand CHER, the distance metric in its proximity measurement is based on Euclidean distance, which is not appropriate for the geometry-agnostic curriculum proposal property. Also, diversity in CHER means how representative the sampled curriculum candidates are with respect to the states in the replay buffer. In contrast, diversity in D2C induces disagreement between the classifiers on the unlabeled target data, and this is reflected in the curriculum optimization cost to encourage the algorithm to propose curriculum goals in the unexplored region. Thus, the meaning of the keyword "diversity" differs somewhat between our work and CHER.
Due to the short period of the rebuttal phase, we do not have enough time to experiment with CHER. But, based on the results in the CHER paper, we think that CHER has contributed to performance increases compared to RL+HER rather than making the algorithm work in an environment where RL+HER completely fails. Since most of the tasks used in D2C cannot be solved with RL+HER without additional tools such as curriculum proposal (based on our own experimental results), we expect that CHER will not bring significant improvement compared to RL+HER in the tasks used in D2C.
3. It is not open sourced.
→ The source code is already included in the supplementary material. Also, the code will be open-sourced after the decision.
**Questions:**
1. Please clarify the interpolation process as referenced in weakness.
→ Thank you for your comment. The pseudo probability in Eq (3) represents how similar the queried state is to the explored states or desired outcome states (lines 178-188), and disagreement between classification heads is represented as intermediate values between 0 (explored states) and 1 (desired outcome states) when optimizing the curriculum loss in Eq (4).
For example, before discovering the desired outcome states, the batch data sampled from the replay buffer in Eq (4) include a diverse range of states from frequently visited states to rarely visited ones such as the states in the frontier of the explored region. As described in lines 178-188, $P_{pseudo}(y=1|g^+)$ is nearly 1 and the loss will be large for the frequently visited states as all classifiers correctly classify them as a label 0 (i.e. $P_{pseudo}(y=1|s)=0$), and the loss will be small for the states in the frontier since the classifiers disagree due to the diversification (i.e. $P_{pseudo}(y=1|s)$ is larger than 0). Thus, the curriculum objective encourages selecting the states in the frontier region.
After discovering the desired outcome states, the curriculum objective encourages selecting the states close to the desired outcome state since the corresponding loss will be the smallest among the losses of states in the replay buffer.
In the case of multi-modal desired outcome distribution as described in Fig 2 (two desired outcome examples, K=2 in Eq (6)), the conditional classifier enables non-collapsed curriculum proposals even when the agent achieves a specific desired outcome earlier, since we utilize bipartite matching (i.e. no duplicative selection) to find the curriculum goal candidates in the replay buffer.
For clarity, we would like to note that the cross-entropy in Eq (4) is used as a convex function that decreases as $P_{pseudo}(y=1|s)$ increases, not as an exact entropy minimization to match a probability distribution (because $P_{pseudo}$ in Eq (3) is not mathematically a probability, hence the name "pseudo").
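To illustrate the qualitative ordering this explanation describes, here is a toy sketch of a disagreement-based pseudo-probability: all heads agreeing on "explored" yields a large curriculum cost, disagreement (a frontier state) an intermediate one, and agreement on "desired" the smallest. The mean aggregation and the hand-set head outputs are hypothetical simplifications for illustration, not the paper's actual learned classifier.

```python
import numpy as np

def pseudo_probability(head_outputs):
    # Hypothetical aggregation: average the per-head scores for the
    # "desired outcome" label. Agreement on 0 (explored) gives 0,
    # agreement on 1 (desired) gives 1, disagreement an in-between value.
    return float(np.mean(head_outputs))

def curriculum_cost(head_outputs, eps=1e-8):
    # Cross-entropy against label y=1, used purely as a convex function
    # that decreases as the pseudo-probability increases.
    return -np.log(pseudo_probability(head_outputs) + eps)

explored = [0.0, 0.0, 0.0]  # all heads: already-explored state
frontier = [0.0, 1.0, 0.0]  # heads disagree: unexplored frontier
desired = [1.0, 1.0, 1.0]   # all heads: desired outcome state

costs = [curriculum_cost(h) for h in (explored, frontier, desired)]
print(costs[0] > costs[1] > costs[2])  # frontier beats explored; desired wins
```

Minimizing this cost over replay-buffer states thus selects frontier states before the desired outcomes are discovered, and desired-outcome-like states afterward.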
2. Figure 6 can be better if the labels do not cover the curves. The font sizes in the figures are too small to read.
→ Thank you for your suggestion. We modified the location of the labels and font size of Figure 6, and we attached the pdf with modified figures of experimental results at the global response. Due to the single-page limit, current figures are a little bit small. After the decision, we will further modify the figures with better visibility and readability on the additional page. | Rebuttal 1:
Rebuttal: We sincerely thank all the reviewers for reviewing our work and providing constructive feedback. We hope that our response has adequately addressed your comments. If you have any remaining questions (existing or new ones) that we can address in our follow-up response to improve your opinion about our work, please do not hesitate to provide additional feedback in the comments. It would be greatly appreciated if we could have more discussions about our work which would provide valuable insights towards further developing our research into a meaningful contribution in the RL domain.
Also, there was a request for figures with larger font sizes, so we have attached the modified figures in the PDF file.
Pdf: /pdf/a6fadb271ef32c473313cc68044ed28d79bdb195.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: The paper proposes Diversify for Disagreement & Conquer (D2C), an outcome-directed curriculum RL method. D2C performs diversification of the goal-conditional classifiers to identify similarities between visited and desired outcome states and ensures that the classifiers disagree on out-of-distribution states, which enables quantifying the unexplored region and designing an arbitrary goal-conditioned intrinsic reward signal in a simple and intuitive way. D2C then employs bipartite matching to define a curriculum learning objective that produces a sequence of well-adjusted intermediate goals, which enable the agent to automatically explore and conquer the unexplored region.
Strengths: D2C performs a classifier diversification process to distinguish the unexplored region from the explored area and desired outcome example.
D2C conquers the unexplored region by proposing the curriculum goal and the shaped intrinsic reward.
D2C enables the agent to automatically progress toward the desired outcome states without prior knowledge of the environment.
Weaknesses: 1. The writing of the paper needs some polishing to improve its readability.
2. The settings of the experiments, e.g. the map types and sizes, are not challenging enough to show the significance of the proposed model.
3. The complexity analysis is missing. Empirically, it takes 1 - 2 days to train a solution for a 36 * 36 map, as described in the Appendix.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. OUTPACE is the latest model in the comparison, but it performs (near) the worst in all tasks (Fig. 4 & 5). I had a quick look at the OUTPACE paper, and it reports completely different results. Please explain how this could happen.
2. What is the complexity of the proposed model?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The paper has a good summary of its limitations and potential societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weakness:**
1. The writing of the paper needs some polishing to improve its readability.
→ Thank you for your suggestion. Since there is no way to revise the current manuscript during the rebuttal phase, we will try to rewrite some phrases and fix grammatical errors as much as possible after the decision. If you could point out some specific examples, it would be really helpful for us to revise our work for better readability.
2. The settings of the experiments, e.g. the map types and sizes, are not challenging enough to show the significance of the proposed model.
→ Thank you for your comment. We would like to note that we followed the experimental settings of prior curriculum RL works. We think the environments used in this work are already challenging enough to show the significance of our model, since it outperforms prior state-of-the-art methods in these environments. In other words, only D2C can achieve all the tasks regardless of their domain, geometry, distribution of the desired outcome states, etc. If you could propose a specific environment appropriate for demonstrating the significance of our model, we will try to experiment with and evaluate our method there. It would be really helpful for improving our work's contribution.
3. The complexity analysis is missing. Empirically, it takes 1 - 2 days to train a solution for a 36 * 36 map as described in Appendix.
→ Question section
**Questions:**
1. OUTPACE is the latest model in comparison, but it performs (near) the worst in all tasks (Fig.4 & 5). I had a quick check of OUTPACE paper, and it reports completely different results. Please explain how this could happen.
→ Thank you for your comment about the baseline. As described in lines 267-272, OUTPACE uses Wasserstein distance-based temporal distance estimation to propose curriculum goals in regions temporally distant from the initial state distribution. Once the agent starts to explore a temporally far region in a specific direction early on, the curriculum proposal is prone to collapse toward this area (Figure 3). Thus, the environments used in OUTPACE contain only one way to reach the desired outcome state (i.e. a uni-modal desired outcome distribution) to prevent such a problem. Also, the curriculum objective in OUTPACE includes meta-learning-based uncertainty quantification combined with the temporal distance estimation, which requires extensive hyperparameter tuning for numerical stability and for balancing the trade-off between uncertainty and temporal distance. In practice, without such tuning, we found that OUTPACE's loss frequently blows up. In contrast, our method simply consists of a single module (i.e. classifiers for disagreement) with a cross-entropy loss and mutual information computation, which results in a stable training process compared to the meta-learning process and Wasserstein distance estimation in OUTPACE.
2. What is the complexity of the proposed model?
→ Thank you for your comment about the complexity analysis. Since our method is based on simple MLP-based classifiers with multiple heads, the inference time of the classifier is negligible. Classifier training is performed every 2000~4500 steps, as described in Table 3 in Appendix A; therefore, classifier training time is nearly negligible compared to the SAC update, which is performed at every step. The curriculum optimization itself requires solving a bipartite matching problem, as described in Section 4.3, and this is addressed via the Minimum Cost Maximum Flow algorithm, implemented in a few lines of C code adapted from prior work [1]. It consists of a double for-loop: the first loop iterates over a few trajectories sampled from the replay buffer, and the second iterates over the desired goals. In practice, this takes around 10 seconds, but curriculum optimization is also performed only every few thousand steps (depending on the maximum episode horizon of each environment); thus, the overall curriculum optimization time is not a significant bottleneck. We believe these responses adequately address the comment about complexity. However, if you still have remaining concerns or questions, please let us know.
[1] Ren, Zhizhou, et al. "Exploration via hindsight goal generation." *Advances in Neural Information Processing Systems* 32 (2019).
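As a rough illustration of the bipartite matching step described above (one distinct replay-buffer candidate per desired outcome mode, preventing duplicative selection), the sketch below brute-forces the matching on toy 2-D points. The Euclidean cost and tiny problem sizes are placeholders: the paper's implementation uses the classifier-based curriculum cost and a Minimum Cost Maximum Flow solver instead.

```python
from itertools import permutations

import numpy as np

rng = np.random.default_rng(1)
candidates = rng.normal(size=(6, 2))           # states from the replay buffer
desired = np.array([[5.0, 0.0], [-5.0, 0.0]])  # K = 2 desired outcome modes

# Placeholder matching cost; D2C would use the curriculum cost instead.
cost = np.linalg.norm(candidates[:, None, :] - desired[None, :, :], axis=-1)

# Brute-force min-cost bipartite matching (fine for tiny K; the paper
# uses Minimum Cost Maximum Flow for efficiency).
K = len(desired)
best, best_rows = float("inf"), None
for rows in permutations(range(len(candidates)), K):
    total = sum(cost[r, k] for k, r in enumerate(rows))
    if total < best:
        best, best_rows = total, rows

print(len(set(best_rows)) == K)  # each mode gets a distinct candidate
```

Because the matching assigns distinct candidates to each mode, the curriculum cannot collapse onto a single desired outcome mode even if the agent reaches one of them first.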
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed response. I appreciate their efforts and their response did help demystify some aspects of the paper. But I still honestly feel that the paper needs some important improvement. So I'll keep my current overall recommendation.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. We appreciate you engaging in the discussion.
We believe we have made sincere efforts to address the issues and clarified some concepts that may have been misunderstood by the reviewers due to our previous unclear description. We politely request the reviewers to provide more clear and specific suggestions for important improvement if possible. We sincerely appreciate their valuable assistance in improving the quality of our work. | null | null | null | null | null | null |